Onboarding SW Engineers: The first week


Onboarding new engineers is a frequent task in SW-based companies, especially given the high turnover in SW-oriented industries, and it is often not done right. In this article I will talk about the topics a technical team lead and/or a project manager should keep in mind when onboarding a new SW engineer. The goal is for the new engineer to acquire the big picture of what’s going on around them, without digging much into details. It can take an engineer months or even years to collect this information by themselves, because they’re concentrating on their low-level tasks. However, if this information is provided directly by the person responsible for the onboarding, it will definitely improve their understanding, judgment and satisfaction from the very first week at work.

Below are the topics I think it’s useful to talk about:

Industry Brief: 1 to 2 hours

This is about basic business information: understanding the goal of the industry, how big it is, who the big players are, who the competitors are, who the vendors and partners are, what the current goal of the company is, and most importantly, what its revenue model is. Having this information stored at an abstract level of detail and presented to any new hire is definitely useful.

General SW Features Brief: 1 hour to 2 days

In some companies, an engineer is hired to work on a specific component of a SW while rarely touching other parts of it. However, it’s important for them to understand, in an abstract way, not only the features of the components they’re working on but also all the features provided by the SW. This will help them make better decisions in their own job. Depending on the complexity of the SW, this can take from hours to days, but it shouldn’t take too long; otherwise it means that too many details are being provided that are not useful in any way to the engineer.

System Architecture Brief: 1 to 4 hours

An engineer needs to be aware of how the different components interface or communicate with each other, what services are used and why, and what the flow of the data is. This can simply be provided by a number of diagrams, without getting into much detail.

Component Code Brief: 1 to 5 hours

The goal of this is to introduce the new hire to the code that they’ll be working on. Having an idea about the code architecture, how the different classes are designed and what configuration files are used is crucial for the new hire to get started on their daily tasks. Part of this is also how to compile the code and install any SW required to achieve that. Also, how the code is usually debugged, where the log files are and how to interpret them. Part of this information can be documented, but a lot of it is better presented in a meeting. The process of actually compiling and running the code can take days in some environments, and the new hire will work on this with the help of the documentation and their colleagues.

Internal & External Tools Brief: 1 hour to 4 hours

I’m not talking here about training the new hire on the details of those tools; I’m just talking about understanding the big picture of which tools are used and why, which can be presented in the first week. Internal tools are tools created by the company for various reasons and used only internally. External tools are provided by other vendors and are usually used in everyday work for people, process, task and code management.

Development Processes Brief: 30 minutes to 2 hours

How do we submit our code changes? How do we plan our work? When do we estimate our tasks? How are code changes reviewed and approved? What’s the flow of a task in the task management SW we use? When and how is testing performed, and what automated tests do we have? How do we communicate with the project stakeholders?

Branching and CI/CD Brief: 1 hour

What branches are used and what is the purpose of each branch? What kind of CI/CD runs on each branch?

Acknowledge well-known challenges

It’s useful to inform the new hire about any well-known engineering problems or challenges, to let them know that these are being worked on, and also to ask them to contribute to solving them. This will let the new hire think: “Those people are facing some challenges, but they are aware of them and doing their best to tackle them.” That’s better than the new hire thinking: “There are lots of issues in this company and nobody cares.” Examples include a buggy legacy code component that is too risky to refactor, an old technology that is too time-costly to upgrade, or a component that has no automated tests.

Training on specific technologies and tools: Days to weeks

This is out of the scope of the first-week onboarding I’m talking about in this article; however, it’s important to mention. Let’s face it: in the SW world new technologies arise every day, and you will never find the perfect candidate who knows every technology they’re supposed to use. Therefore it’s expected that new hires will need to acquire some new skills. Most of the time a book or an online course will provide a good introduction, but sometimes a professional training course is required. In the first couple of weeks, it’s good to figure out what training the new hire needs.

Top 6 Useful C++20 Features


C++20 was published in December 2020, with various updates to the language as well as the standard library. In this post I will talk about the top 6 features that I found useful for application developers. Some of these features are not yet fully supported by compilers.

String Formatting

This library feature makes string formatting much easier in C++. Before C++20, we had two older ways of formatting strings:

Old Way #1: Using sprintf

#include <cstdio>  // sprintf_s is the bounds-checked variant (Annex K / MSVC)
#include <string>

int a = 5;
char buffer[50];
int n = sprintf_s(buffer, 50, "%d", a);
std::string s = buffer;

One major drawback of the above code is that it’s not modern C++ but rather C, requiring the use of C strings. Another drawback is that you need to carefully handle the buffer size to avoid overflows.

Old Way #2: Using std::stringstream

#include <sstream>
#include <string>

int a = 5;
std::stringstream sstr;
sstr << a;
std::string s = sstr.str();

The drawback of this approach is that it’s not very readable, and it’s not really the best way to format strings.

C++20: Using std::format

#include <format>

std::string s = std::format("Hello {} {}!\n", "world", 2021);

Similar to what folks do in C# or Python, this does the job neatly. It is based on the {fmt} library (fmt::format), and at the time of writing std::format is not yet supported by any major C++ implementation, but they’re working on it!
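Until std::format lands in your standard library, a minimal sketch of the same call using the {fmt} library directly (assuming {fmt} is installed and linked, e.g. with -lfmt) could look like this:

#include <fmt/core.h>
#include <string>

int main() {
    // Same placeholder syntax that std::format standardizes
    std::string s = fmt::format("Hello {} {}!\n", "world", 2021);
}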

Concepts and constraints

This language feature improves how compile errors are reported. When it comes to compile error messages, C++ compilers produce some of the least understandable error descriptions, which can easily cause a lot of developer frustration, especially for beginners. This problem is often amplified when using C++ templates in generic programming.

Using constraints and concepts in your generic code allows the compiler to give more understandable error messages, saving you time. Consider the following code:

#include <string>
#include <functional> // std::hash

struct empty_struct {};

template<typename T>
void f2(T t) {
    size_t hash = std::hash<T>{}(t);   
}

template<typename T>
void f(T t) {

    //Bla Bla 
    f2(t);

    //Bla Bla
}
 
int main() {
  using std::operator""s;
  f("abc"s); // OK, std::string can be hashed
//f(empty_struct {});  // Error: 'std::hash<T>': no appropriate default constructor available

}

In the above code, because empty_struct cannot be hashed and cannot be passed to std::hash, the compiler reports the error inside f2() rather than at the call site in main(), causing confusion, especially if the code is more complex than this.

Constraints and concepts solve this issue by making sure that the type used in the template satisfies whatever constraints you specify.

#include <string>
#include <functional> // std::hash
#include <concepts>

struct empty_struct {};

//Concept to be used by constraint in template functions below
template<typename T>
concept Hashable = requires(T a) {
    { std::hash<T>{}(a) } -> std::convertible_to<std::size_t>;
};

template<typename T> requires Hashable<T>
void f2(T t) {
    size_t hash = std::hash<T>{}(t);
}

template<typename T> requires Hashable<T>
void f(T t) {

    //Bla Bla 
    f2(t);

    //Bla Bla
}
 
int main() {
  using std::operator""s;
  f("abc"s); // OK, std::string satisfies Hashable
//f(empty_struct {}); // Error: 'f': no matching overloaded function found.
                      // 'f': the associated constraints are not satisfied
}

In the above code, we created a concept Hashable that is satisfied by any type that produces a size_t when passed to std::hash. Then we added a constraint to both f() and f2() applying this concept to type T.
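As a side note, C++20 also offers terser spellings of the same constraint; the sketch below shows two equivalent ways of constraining f2() (the names f2_v2 and f2_v3 are just for illustration):

//Constrained template parameter: same meaning as 'requires Hashable<T>'
template<Hashable T>
void f2_v2(T t) {
    size_t hash = std::hash<T>{}(t);
}

//Abbreviated function template using constrained auto
void f2_v3(Hashable auto t) {
    size_t hash = std::hash<decltype(t)>{}(t);
}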

As of this writing, most of the logic for concepts and constraints is supported in the major C++ implementations.

Modules

This language feature greatly reduces compilation times and avoids many other compilation headaches.

As mentioned here, modules are sets of source code files that are compiled independently of the code that imports them. They avoid many of the issues faced when including header files: macros, preprocessor directives and non-exported names defined inside the module are not visible to the code that uses the module. Each module is compiled once into a binary file and then used from multiple parts of the code, resulting in much faster compile times.

// std.threading is an experimental MSVC standard library module name
import std.threading;

int main() {

	std::thread th;
	std::mutex m;
}

As shown above, one module can take the place of multiple related header files: std.threading covers the contents of <atomic>, <condition_variable>, <future>, <mutex>, <shared_mutex>, and <thread>.

As of this writing, C++ modules are partially supported in major C++ implementations.

JThreads

The new library feature std::jthread (joining thread) gives you out-of-the-box enhancements not found in std::thread. It fixes the nasty behavior of std::thread not joining when its destructor is called (a cause of many hidden bugs), and it also helps you control cancellation of the thread in a much easier way. std::jthread is supported in major C++ implementations.

The code snippet below shows how the thread is automatically stopped when the destructor is called.

import std.threading;
import std.core;
using namespace std::literals::chrono_literals;

void jthread_graceful_auto_stop() {
    //Lambda takes a stop token
    std::jthread jt([](std::stop_token stoken) {
        while (!stoken.stop_requested()) {
            //Working...
            std::cout << "Working ..." << std::endl;
            std::this_thread::sleep_for(1s);
        }
        std::cout << "Stopped after getting stop request" << std::endl;
        });

    std::this_thread::sleep_for(5s);
    //Destructor automatically sets the token source to request stop.
}

The code below shows how manual stopping can be done:

import std.threading;
import std.core;
using namespace std::literals::chrono_literals;

void jthread_graceful_manual_stop() {
    //Lambda takes a stop token
    std::jthread jt([](std::stop_token stoken) {
        while (!stoken.stop_requested()) {
            //Working...
            std::cout << "Working ..." << std::endl;
            std::this_thread::sleep_for(1s);
        }
        std::cout << "Stopped after getting stop request" << std::endl;
        });

    std::this_thread::sleep_for(5s);

    std::stop_source ss = jt.get_stop_source();
    
    std::cout << "requesting thread stop" << std::endl;

    ss.request_stop();

    jt.join();

    std::cout << "Finished waiting thread to stop" << std::endl;
}

Coroutines

A coroutine is a function that can be suspended and resumed, returning control multiple times, and potentially being resumed from different threads (though most probably it will be one thread). Each time it suspends, the state of the function is preserved so that the next resumption starts from the point where it last stopped.

A C++ function is a coroutine if it uses: co_await, co_yield or co_return.

Coroutines can be considered the most advanced feature of C++20. As of this writing, I don’t think they’re quite ready for everyday C++ application developers to use, as they’re quite complicated to implement using the pure standard library without third-party libraries such as cppcoro. AFAIK, the functionality of cppcoro should become available in the standard library in future C++ revisions.

The code below shows how to implement a coroutine that uses co_yield using only the standard library:

#include <stdio.h>
#include <cstdint>   // std::uint64_t
#include <coroutine>

using namespace std;

struct Generator {
	struct Promise;

	// compiler looks for promise_type
	using promise_type = Promise;
	coroutine_handle<Promise> coro;
	Generator(coroutine_handle<Promise> h) : coro(h) {}

	~Generator() {
		if (coro)
			coro.destroy();
	}
	// get current value of coroutine
	int value() {
		return coro.promise().val;
	}

	// advance coroutine past suspension
	bool next() {
		coro.resume();
		return !coro.done();
	}
	struct Promise {
		// current value of suspended coroutine
		int val;

		// called by compiler first thing to get coroutine result
		Generator get_return_object() {
			return Generator{ coroutine_handle<Promise>::from_promise(*this) };
		}

		// called by compiler first time co_yield occurs
		suspend_always initial_suspend() {
			return {};
		}

		// required for co_yield
		suspend_always yield_value(int x) {
			val = x;
			return {};
		}
		// called by compiler when the coroutine ends without returning a value
		void return_void() {}
		// called by compiler last thing to await final result;
		// coroutine cannot be resumed after this is called
		suspend_always final_suspend() const noexcept {
			return {};		
		}

		void unhandled_exception() {}

	};

};

Generator fibCoroutineFunc(int n) {

	std::uint64_t a = 0, b = 1;
	// yield the first n Fibonacci numbers, then finish
	for (int i = 0; i < n; i++)
	{
		co_yield b;
		auto tmp = a;
		a = b;
		b += tmp;
	}
}

int main()
{
	int n = 10;

	for (Generator myCoroutineResult = fibCoroutineFunc(n); myCoroutineResult.next(); ) {
		printf("%d ", myCoroutineResult.value());
	}

	return 0;
}

The code below does the same using cppcoro, but in a much neater way:

#include <cstdint>
#include <iostream>
#include <cppcoro/generator.hpp>

cppcoro::generator<const std::uint64_t> fibonacci()
{
  std::uint64_t a = 0, b = 1;
  while (true)
  {
    co_yield b;
    auto tmp = a;
    a = b;
    b += tmp;
  }
}

void usage()
{
  for (auto i : fibonacci())
  {
    if (i > 1'000'000) break;
    std::cout << i << std::endl;
  }
}

Calendar & Timezone

The two new additions are part of the std::chrono library to aid with date and time operations. As of this writing, they are partially supported in C++ implementations. You can find their details here.
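As a rough sketch of what they enable (assuming a standard library with complete C++20 chrono support; the date and zone values below are arbitrary examples):

#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;

    //Calendar: compose a date from year / month / day parts
    year_month_day ymd = year{ 2021 } / month{ 7 } / day{ 15 };
    std::cout << ymd << '\n';   // prints 2021-07-15

    //Timezone: render the current time in a specific IANA time zone
    zoned_time zt{ "Europe/Berlin", system_clock::now() };
    std::cout << zt << '\n';
}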

Question Detector for Twitter & Instant Messages


I looked online for a ready-made question detector but I couldn’t find any, so I decided to code my own and post it online.

This question detector (in Python) works with any sentence, but it’s designed specifically for Twitter or instant messages (IM). It mainly relies on the following:

  • The following assumption is made: any sentence containing a question mark is considered a question.
  • Using the Twitter word tokenizer from NLTK to make sure all sentences classified as questions contain at least one of these key words: “what, why, how, when, where, did, do, does, have, has, am, is, are, can, could, may, would, will, ?”, etc. It’s highly unlikely that a question contains none of these tokens.
  • Using a naive Bayes classifier trained on the NLTK corpus ‘nps_chat’, which alone has an accuracy of 67% under cross-validation.

I have tested this detector on a small data set, getting an accuracy of 93%.

import nltk.corpus
from nltk.corpus import nps_chat
from nltk.tokenize import TweetTokenizer

class QuestionDetector():

    #Class Initializer:
    #- Creates naive bayes classifier using nltk nps_chat corpus.
    #- Initializes Tweet tokenizer
    #- Initializes question words set to be used
    def __init__(self):
        posts = nltk.corpus.nps_chat.xml_posts()
        featuresets = [(self.__dialogue_act_features(post.text), post.get('class')) for post in posts]
        size = int(len(featuresets) * 0.1)
        train_set, test_set = featuresets[size:], featuresets[:size]
        self.classifier = nltk.NaiveBayesClassifier.train(train_set)
        Question_Words = ['what', 'where', 'when','how','why','did','do','does','have','has','am','is','are','can','could','may','would','will','should',
"didn't","doesn't","haven't","isn't","aren't","can't","couldn't","wouldn't","won't","shouldn't",'?']
        self.Question_Words_Set = set(Question_Words)
        self.tknzr = TweetTokenizer()
    #Private method, gets the word feature vector from a sentence
    def __dialogue_act_features(self,sentence):
         features = {}
         for word in nltk.word_tokenize(sentence):
             features['contains({})'.format(word.lower())] = True
         return features
    #Public method, returns 'True' if the sentence is predicted to be a question, returns 'False' otherwise
    def IsQuestion(self,sentence):
        if "?" in sentence:
            return True
        tokens = self.tknzr.tokenize(sentence.lower())
        #intersection() returns a set; an empty set means no question word was found
        if not self.Question_Words_Set.intersection(tokens):
            return False
        predicted = self.classifier.classify(self.__dialogue_act_features(sentence))
        if predicted == 'whQuestion' or predicted == 'ynQuestion':
            return True
        
        return False

Question vs Query

Note that a question is simply an interrogative sentence and does not necessarily imply a query. According to this paper from Google Research [1], below are the 6 different types of interrogative sentences:

  1. Advertisement. This kind of tweet asks the reader a question and then delivers an advertisement. E.g., ‘Incorporating your business this year? Call us today for a free consultation with one of our attorneys. 855-529-8753. http://buz.tw/FjJCV’
  2. Article or News Title on the Web. These tweets post article names or news titles together with links to the webpage. E.g., ‘New post: Pregnancy Miracle – A Miracle or a Scam? http://articlescontentonline.com/pregnancy-miracle-amiracle-or-a-scam’
  3. Question with Answer. These tweets contain questions followed by their answers. E.g., ‘I even tried staying away from my using my Internet for a couple hours. The result? Insanity’
  4. Question as Quotation. These tweets contain questions in quoted sentences as references to what other people said. E.g., ‘I think Brian’s been drinking in there because I’m hearing him complain about girls, and then he goes “Wright, are you sure you’re not gay?”’
  5. Rhetorical Question. This kind of tweet includes rhetorical questions, which seem to be questions but come without the expectation of any answer. In other words, these tweets encourage readers to think about the obvious answers. E.g., ‘You ruined my life and I’m supposed to like you’
  6. Qweet (Query). These tweets ask for some information or help. E.g., ‘What’s your favorite Harry Potter scene?’ Sometimes the tweet author posts a question asked by someone on the web, e.g., CQA portals, forums, etc. The following is an example: ‘Questions about panda update. When will the effect end? http://goo.gl/fb/iiRjn’

References

[1] Baichuan Li, Xiance Si, Michael R. Lyu, Irwin King, and Edward Y. Chang. Question Identification on Twitter. 2011.

Labels found useful for Native C++ Development


Using labels in modern programming is often considered bad practice, as it can result in spaghetti (unmaintainable) code. However, I often find them very useful in native C/C++ programming, where no exceptions are used (error handling via return values is extensive) and manual memory management is needed. They increase the readability and maintainability of the code to a great extent.

The following code does not use a label. It has:

1- A lot of nested if blocks, causing bad readability.

2- Redundant memory cleanup code, causing bad maintainability.

DWORD Foo_NoLabel() {

DWORD    dwErr = NO_ERROR;
Boo      *pBoo = NULL;
Soo      *pSoo= NULL;
Doo      *pDoo= NULL;
Loo      *pLoo= NULL;

dwErr = GetBoo(&pBoo);
if (dwErr == NO_ERROR)  {
    dwErr = GetSoo(&pSoo);
    if (dwErr == NO_ERROR) {
        dwErr = GetDoo(&pDoo);
        if (dwErr == NO_ERROR) {
            dwErr = GetLoo(&pLoo);
            if (dwErr == NO_ERROR) {
                //Great!
            }
            else {
                //Oops!
                delete pBoo;
                delete pSoo;
                delete pDoo;
            }
        }
        else {
            //Oops!
            delete pBoo;
            delete pSoo;
        }
     }
    else {
        //Oops!
        delete pBoo;
     }
}

if (dwErr == NO_ERROR) {
    dwErr = UseAll(pBoo,pLoo,pDoo,pSoo);

    delete pBoo;
    delete pSoo;
    delete pDoo;
    delete pLoo;
}

return dwErr;

}

This much cleaner code uses a CleanUp label:


DWORD Foo_UsingLabel() {

DWORD    dwErr = NO_ERROR;
Boo      *pBoo = NULL;
Soo      *pSoo= NULL;
Doo      *pDoo= NULL;
Loo      *pLoo= NULL;

dwErr = GetBoo(&pBoo);
if (dwErr != NO_ERROR)  { goto CleanUp; }

dwErr = GetSoo(&pSoo);
if (dwErr != NO_ERROR)  { goto CleanUp; }

dwErr = GetDoo(&pDoo);
if (dwErr != NO_ERROR)  { goto CleanUp; }

dwErr = GetLoo(&pLoo);
if (dwErr != NO_ERROR)  { goto CleanUp; }

dwErr = UseAll(pBoo,pLoo,pDoo,pSoo);

CleanUp:
if (pBoo) delete pBoo;
if (pSoo) delete pSoo;
if (pDoo) delete pDoo;
if (pLoo) delete pLoo;

return dwErr;
}


Representing a state in a compact way


Many algorithms depend on representing the states of many objects in the domain world in the most compact way possible.

Here is some C++ code to do this:

#include <vector>

// Packs one bit per object; assumes at most 31 objects for an int state
int GetWorldState(std::vector<Obj>* Objs)
{
    int state = 0;

    for (size_t i = 0; i < Objs->size(); i++)
    {
        //Shift left to make room for the next bit
        state <<= 1;

        //Get the current object
        Obj* ch = &(*Objs)[i];

        //Set the bit if the object has the special property, otherwise leave it 0
        if (ch->isLeft) state |= 1;
    }

    //Return the final result
    return state;
}

This idea can be generalized to include complex objects.
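For example, here is a hedged sketch of one such generalization, packing two boolean properties per object into a wider integer (the Obj fields below are hypothetical):

#include <cstdint>
#include <vector>

//Hypothetical example type with two boolean properties
struct Obj { bool isLeft; bool isActive; };

//Pack two bits per object into a 64-bit state (holds up to 32 objects)
std::uint64_t GetWorldState2(const std::vector<Obj>& objs)
{
    std::uint64_t state = 0;
    for (const Obj& o : objs)
    {
        //Reserve two bits for this object
        state <<= 2;
        if (o.isLeft)   state |= 0b01;
        if (o.isActive) state |= 0b10;
    }
    return state;
}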

Software Engineering and AI: The Gigantic Picture


Introduction

AI research aims to devise techniques that make computers perceive, reason and act. On the other hand, Software Engineering (SWE) aims to support humans in developing large software faster and more effectively. This article gives the gigantic picture of the relationship between SWE and AI and how they can contribute to each other.

Software Engineering

The main concern of SWE is the efficient and effective development of high-quality, often very large, software systems. The goal is to support software engineers and managers with (intelligent) tools and methods so that they can develop better software faster.

How do they overlap?

Both deal with modeling objects from the real world, like business processes, expert knowledge or process models.

How can AI contribute to SWE Research?

1)    Translation of informal descriptions of requirements into formal descriptions, using natural language processing.

2)    Code Auto-Generation: Generating code from detailed design descriptions.

3)    AI Testing: The diversity of test cases when testing a SW may cause a buggy release (as not all test cases can be applied). AI’s role here is applying only the sufficient test cases (instead of all of them) – just as humans do – to save time.

4)    Software Size Estimation: Estimating the size of a proposed SW project using machine learning techniques.

How can SWE contribute to AI Research?

1)    Systematic Development of AI Applications

2)    Operation of AI applications in real-life environments

3)    Maintenance and improvement of AI applications.

Figure: Intersection between AI and SWE (from: AI and SWE – Current and Future Trends, Jörg Rech & Klaus-Dieter Althoff)

Sciences lying in the intersection

Agent-Oriented SWE – Knowledge-Based SWE – Computational Intelligence – Ambient Intelligence

References

Jörg Rech & Klaus-Dieter Althoff – Artificial Intelligence and SWE: Status and Future Trends

Artificial Intelligence Raining from “The Cloud” on Ubiquitous Computers!


Introduction

Welcome again! Today we’re having a little chat about cloud computing and its relation to both AI and ubiquitous computing, a very interesting topic to me!

Cloud Computing

Cloud computing simply means that the programs you run and the data you store live somewhere on a server around the world. You won’t bother storing any information on your personal computer or even using it to run a complicated program that requires sophisticated hardware. Your personal computer will do little more than upload the information to be processed or download the information you need to access. All the programs you use will be web-based, via the internet. Cloud computing is considered the paradigm shift following the shift from mainframe to client–server in the early 1980s.

If you look around, you will see that cloud computing is taking over. Many desktop applications are turning into web applications, and current web applications are getting more powerful. What really makes good use of cloud computing nowadays is mobile phones: since they have relatively little processing power, they benefit a lot from processing on the cloud instead. To understand more about the pros and cons of cloud computing, visit this link.

Figure illustrating cloud computing – Source: http://www.briankeithmay.com

The Effect of Cloud Computing on Computer Hardware Industry

I think that cloud computing will polarize the computer hardware industry into 2 distinct poles: one pole is the giant servers that hold all the data and programs and do all the processing for the clouds, and the other pole is the simple computer terminals with relatively minimal storage and processing power that use the clouds as their main storage and computation resource. This means that the hardware industry will not push to advance personal computers’ hardware like it did before (as everything is done in the cloud).

AI as cloud-based services

Google has launched the cloud-based service Google Prediction API, which provides a simple way for developers to create software that learns how to handle incoming data. For example, the Google-hosted algorithms could be trained to sort e-mails into categories for “complaints” and “praise” using a dataset that provides many examples of both kinds. Future e-mails could then be screened by software using that API and handled accordingly. (Technology Review reference)

On the other hand, AI Solver Studios said they will be rolling out cloud computing services to allow instant access alongside their desktop application AI Solver Studio. AI Solver Studio is a unique pattern recognition application for finding optimal solutions to classification problems, using several powerful and proven artificial intelligence techniques including neural networks, genetic programming and genetic algorithms.

How can Cloud Computing improve AI?

Since cloud computing means that all the data, as well as the running programs, are stored somewhere in a cloud, a large amount of data becomes available for analysis by AI programs, which can perform data mining or other AI-related techniques to deduce useful information.

For example, consider the WordPress.com application, where you have your own capacity to store whatever posts and multimedia you need. If the data and the behavior of users such as you weren’t all stored in the WordPress.com cloud, there wouldn’t be enough data available for AI purposes.

Thus I consider cloud computing to enhance the performance of AI by providing a lot of data for AI techniques to work on.

Cloud Computing and Ubiquitous Computing

Cloud computing is essential for ubiquitous computing (see my previous post to learn more about it) to flourish. This is because most ubiquitous computers will suffer from relatively limited hardware resources (due to their ubiquitous nature), which will make them really benefit from the resources of a cloud on the internet.

Conclusion

There’s no doubt that merging the 2 trends (Ubiquitous and Cloud Computing) and supporting them with AI will result in tremendous technological advances. I think I will be talking about them more in the future!

My First Research Paper! (To Be Published)


Greetings! I wanted to share with you my first AI-related paper, which will be published soon. I might (or might not) upload the whole paper in another post later to get your reviews, but for now, here are the abstract and keywords.

Abstract

Research in learning and planning in real-time strategy (RTS) games is very interesting for several industries, such as the military industry, robotics, and most importantly the game industry. Recently published work on online case-based planning in RTS games does not include the capability of online learning from experience, so the knowledge certainty remains constant, which leads to inefficient decisions. In this paper, an intelligent agent model based on both online case-based planning (OLCBP) and reinforcement learning (RL) techniques is proposed. In addition, the proposed model has been evaluated using empirical simulation on Wargus (an open-source clone of the well-known real-time strategy game Warcraft 2). This evaluation shows that the proposed model increases the certainty of the case base by learning from experience, and hence improves the process of decision making for selecting more efficient, effective and successful plans.

Keywords

Case-based Reasoning, Reinforcement Learning, Online Case-based Planning, Real-Time Strategy Games, Sarsa(λ) Learning, Eligibility Traces, Intelligent Agent.