Artificial Intelligence Raining from “The Cloud” on Ubiquitous Computers!


Welcome again! Today, we’re having a little chat about “Cloud Computing and its relation to both AI and Ubiquitous Computing”, a very interesting topic to me!

Cloud Computing

Cloud computing simply means that the programs you run and the data you store live on servers somewhere around the world. You don’t have to store information on your personal computer, or even use it to run a complicated program that requires sophisticated hardware. Your personal computer does barely anything but upload the information to be processed or download the information you need to access. All the programs you use are web-based, reached via the internet. Cloud computing is considered the paradigm shift following the shift from mainframes to client–server in the early 1980s.

If you look around, you will notice that cloud computing is taking over. Many desktop applications are becoming web applications, and existing web applications are getting more powerful. Mobile phones make especially good use of cloud computing nowadays: since they have relatively little processing power, they benefit a lot from offloading processing to the cloud instead. To understand more about the pros and cons of cloud computing, visit this link.

Figure illustrating cloud computing

The Effect of Cloud Computing on Computer Hardware Industry

I think that cloud computing will polarize the computer hardware industry into two distinct poles: one pole is the giant servers that hold all the data and programs and carry out all the processing of the clouds; the other is the simple computer terminals with relatively minimal storage and processing power, which use the clouds as their main storage and computation resource. This means the hardware industry will not care about advancing personal computers’ hardware as it did before (since everything is done in the cloud).

AI as cloud-based services

Google has launched the cloud-based Google Prediction API, which provides a simple way for developers to create software that learns how to handle incoming data. For example, the Google-hosted algorithms could be trained to sort e-mails into “complaints” and “praise” categories using a dataset that provides many examples of both kinds. Future e-mails could then be screened by software using that API, and handled accordingly. (Technology Review reference)
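To make the idea concrete, here is a minimal sketch of such a trainable classifier: a tiny Naive-Bayes-style model trained on made-up “complaint”/“praise” examples. It only illustrates the concept behind this kind of service; it is not the Prediction API itself, and the example texts and smoothing choices are my own.

```python
import math
from collections import Counter

# Toy labeled examples, like the dataset you would upload to such a
# service (the texts are made up for illustration).
train = [
    ("the product broke after one day", "complaint"),
    ("terrible support and slow refund", "complaint"),
    ("i love this service great job", "praise"),
    ("fantastic quality thank you so much", "praise"),
]

# Count how often each word appears under each label.
word_counts = {"complaint": Counter(), "praise": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = len({w for c in word_counts.values() for w in c})

def classify(text):
    """Pick the label whose word statistics best explain the text."""
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a label.
            score += math.log((word_counts[label][w] + 1) / (total + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("this broke and support was terrible"))  # complaint
```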

On the other hand, AI Solver Studios said they will be rolling out cloud computing services to allow instant access alongside their desktop application, AI Solver Studio. AI Solver Studio is a unique pattern recognition application that finds optimal solutions to classification problems using several powerful and proven artificial intelligence techniques, including neural networks, genetic programming, and genetic algorithms.

How can Cloud Computing improve AI

Since cloud computing implies that all the data, as well as the running programs, are stored somewhere in a cloud, a large amount of data becomes available for AI programs to analyze, whether through data mining or other AI-related techniques, in order to deduce useful information.

For example, consider an application in which you have your own space to store the posts and multimedia you need. If the data and the behavior of users, such as you, weren’t all stored in its cloud, not enough data would be available for AI purposes.

Thus I consider cloud computing to enhance the performance of AI by providing a lot of data for AI techniques to work on.

Cloud Computing and Ubiquitous Computing

Cloud Computing is essential for Ubiquitous Computing (see my previous post to know more about it) to flourish. Most ubiquitous computers will have relatively limited hardware resources (due to their ubiquitous nature), so they will really benefit from the resources of a cloud on the internet.


There’s no doubt that merging the two trends (Ubiquitous and Cloud Computing) and supporting them with AI will result in tremendous technological advances. I think I will be talking about them more in the future!

Natural Language Processing – The Big Picture


Since I’m personally hungry for a big picture of everything around me, since natural language processing (NLP) is highly important for computers to deal with humankind, since many AI-related careers depend on it, and since it is used extensively commercially, for all those reasons I will give a big picture of it (yes, I mean NLP).

What’s Natural Language Processing?

The simple definition is obvious: making computers understand or generate human text in a certain language. The complete definition, however, is: a set of computational techniques for analyzing and representing naturally occurring texts (at one or more levels) for the purpose of achieving human-like language processing for a range of applications.

NLP can be done for any language, of any mode or genre, for oral or written texts. It works over multiple types or levels of language processing, from the level of understanding a word up to the level of understanding the big picture of a complete book.

A brain with language samples. (Image courtesy of MIT OCW.)

Related Sciences?

Linguistics: focuses on formal, structural models of language and the discovery of language universals – in fact the field of NLP was originally referred to as Computational Linguistics.

Computer Science: is concerned with developing internal representations of data and efficient processing of these structures.

Cognitive Psychology: looks at language usage as a window into human cognitive processes, and has the goal of modeling the use of language in a psychologically plausible way.

Language Processing vs. Language Generation

NLP may focus on language processing or generation. The first of these refers to the analysis of language for the purpose of producing a meaningful representation, while the latter refers to the production of language from a representation. The task of language processing is equivalent to the role of reader/listener, while the task of language generation is that of the writer/speaker. While much of the theory and technology are shared by these two divisions, Natural Language Generation also requires a planning capability. That is, the generation system requires a plan or model of the goal of the interaction in order to decide what the system should generate at each point in an interaction.

What are its sub-problems?

NLP is performed by solving a number of sub-problems, where each sub-problem constitutes a level (mentioned earlier). Note that only a portion of those levels might be applied, not necessarily all of them; for example, some applications require only the first three levels. Also, the levels can be applied in a different order, independent of their granularity.

Level 1 – Phonology: This level applies only if the text originated as speech. It deals with the interpretation of speech sounds within and across words; speech sounds can give a big hint about the meaning of a word or a sentence.

Level 2 – Morphology: Deals with understanding distinct words through their morphemes (the smallest units of meaning). For example, the word “preregistration” can be morphologically analyzed into three separate morphemes: the prefix “pre”, the root “registra”, and the suffix “tion”.
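A toy affix-stripping routine gives the flavor of this level. The prefix and suffix lists below are invented for illustration and are nowhere near a real morphological analyzer:

```python
# A minimal sketch of affix stripping: peel one known prefix and one
# known suffix off a word (affix lists are made up for illustration).
PREFIXES = ["pre", "un", "re"]
SUFFIXES = ["tion", "ing", "ness"]

def split_morphemes(word):
    """Split a word into prefix, root, and suffix morphemes (if present)."""
    prefix = next((p for p in PREFIXES if word.startswith(p)), "")
    rest = word[len(prefix):]
    suffix = next((s for s in SUFFIXES if rest.endswith(s)), "")
    root = rest[: len(rest) - len(suffix)] if suffix else rest
    return [m for m in (prefix, root, suffix) if m]

print(split_morphemes("preregistration"))  # ['pre', 'registra', 'tion']
```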

Level 3 – Lexical: Deals with understanding everything about individual words: their position in speech, their meanings, and their relations to other words.

Level 4 – Syntactic : Deals with analyzing the words of a sentence so as to uncover the grammatical structure of the sentence.

Level 5 – Semantic: Determines the possible meanings of a sentence by focusing on the interactions among word-level meanings in the sentence. Some people may think this is the level that determines meaning, but actually all the levels do.

Level 6 – Discourse : Focuses on the properties of the text as a whole that convey meaning by making connections between component sentences.

Level 7 – Pragmatic: Explains how extra meaning is read into texts without actually being encoded in them. This requires much world knowledge, including the understanding of intentions, plans, and goals. Consider the following two sentences:

  • The city councilors refused the demonstrators a permit because they feared violence.
  • The city councilors refused the demonstrators a permit because they advocated revolution.

The meaning of “they” differs between the two sentences. To figure out the difference, world knowledge stored in knowledge bases and inferencing modules must be utilized.

What are the approaches for performing NLP?

Natural language processing approaches fall roughly into four categories: symbolic, statistical, connectionist, and hybrid. Symbolic and statistical approaches have coexisted since the early days of the field, while connectionist NLP work first appeared in the 1960s.

Symbolic Approach: Symbolic approaches perform deep analysis of linguistic phenomena and are based on explicit representation of facts about language through well-understood knowledge representation schemes and associated algorithms. The primary source of evidence in symbolic systems comes from human-developed rules.

Statistical Approach: Statistical approaches employ various mathematical techniques and often use large text input to develop approximate generalized models of linguistic phenomena based on actual examples of these phenomena provided by the text input without adding significant linguistic or world knowledge. In contrast to symbolic approaches, statistical approaches use observable data as the primary source of evidence.
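A classic minimal example of the statistical approach is a bigram model: counting which word follows which in a corpus, with no linguistic knowledge added. The three-sentence “corpus” below is made up; real statistical NLP would use millions of sentences:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "large text input".
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count bigrams: how often word b follows word a.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def p_next(a, b):
    """Estimated probability that word b follows word a."""
    total = sum(bigrams[a].values())
    return bigrams[a][b] / total if total else 0.0

print(p_next("the", "cat"))  # 2 of the 6 words after "the" are "cat"
```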

Connectionist Approach: Similar to the statistical approaches, connectionist approaches also develop generalized models from examples of linguistic phenomena. What separates connectionism from other statistical methods is that connectionist models combine statistical learning with various theories of representation – thus the connectionist representations allow transformation, inference, and manipulation of logic formulae. In addition, in connectionist systems, linguistic models are harder to observe due to the fact that connectionist architectures are less constrained than statistical ones.

NLP Applications

Information Retrieval – Information Extraction – Question-Answering – Summarization – Machine Translation – Dialogue Systems


Liddy, E. D. “Natural Language Processing.” In Encyclopedia of Library and Information Science, 2nd ed. Marcel Dekker, Inc.

Ubiquitous Computing and AI


OK! Today I give a very big picture of that crucial term “Ubiquitous Computing” and its relation to AI (YES, I mean Artificial Intelligence). Actually, it’s a shame for any AI geek not to know it. Just remember that the word “ubiquitous” means “existing or being everywhere, or in all places, at the same time”.

What is Ubiquitous Computing?

I would describe Ubiquitous Computing (UbiComp) as “the incorporation of computers into the background of human life without any physical interaction with them”. It’s considered the coming third era of computing, where the first era was the mainframe era and the second is the personal computer era (what we are living now). The term Ubiquitous Computing (also known as Calm Technology) was coined by Mark Weiser, the father of Ubiquitous Computing, in the late 80s (so it’s not a new thing).

Ubiquitous Computing involves tens, hundreds, or even thousands of different-sized (often tiny) computers sensing the environment, deducing things, communicating, and performing actions to help a human without any interface actually being used. Thousands of computers would be embedded in everyday objects such as paper, pens, books, doors, buildings, walls, food containers, clothes, furniture, equipment, etc., to support a human’s life.

Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box (the computer itself). Ubiquitous Computing, however, means no human attention is paid to any computer interface while using it. Take a look at motor technology, which is already ubiquitous: a glance through the shop manual of a typical automobile reveals twenty-two motors and twenty-five more solenoids. They start the engine, clean the windshield, lock and unlock the doors, and so on. By paying careful attention, you might be able to notice whenever you activated a motor, but there would be no point to it. Computers in the Ubiquitous Computing era should be like that.

Suppose you want to lift a heavy object. You can call in your strong assistant to lift it for you, or you can be yourself made effortlessly, unconsciously, stronger and just lift it. There are times when both are good. Much of the past and current effort for better computers has been aimed at the former; ubiquitous computing aims at the latter.

An example of life with Ubiquitous Computing.

AI and UbiComp ?

According to this essay, AI will play a major role in UbiComp in three different ways:

  1. Ubiquitous Computing needs a transparent interface to work, which means a natural way of communicating with humankind. This involves a lot of artificial intelligence, such as gesture recognition, sound and speech recognition, and computer vision.
  2. Ubiquitous Computing needs computers to be aware of their context, such as location and time. Artificial intelligence plays an important role in context awareness: it helps the computer perceive people’s location and provide the proper service accordingly. For example, when you are at the office, you may want to read some business reports, but when you go back home, you want to watch a movie and enjoy a coffee. These scenarios impose requirements on artificial intelligence agents.
  3. Ubiquitous Computing will also benefit from automated learning from past experience and from capturing people’s experience. Learning agents are introduced into this framework to perceive people’s behavior and make decisions based on people’s preferences.
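The office/home scenario in point 2 can be sketched as a toy rule-based context-aware agent. The contexts, rules, and services below are invented for illustration, not taken from the essay:

```python
# A toy rule-based context-aware agent: each rule maps an observed
# context (location, time of day) to a service to offer.
RULES = [
    ({"location": "office", "time": "morning"}, "open business reports"),
    ({"location": "home", "time": "evening"}, "play a movie and brew coffee"),
]

def suggest(context):
    """Return the service whose rule matches the observed context."""
    for condition, service in RULES:
        if all(context.get(k) == v for k, v in condition.items()):
            return service
    return "do nothing"

print(suggest({"location": "home", "time": "evening"}))
```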


Shang, Yu Liang, The Role and Possibility of Artificial Intelligence in Ubiquitous Computing, 1993

Mark Weiser, “The Computer for the Twenty-First Century,” Scientific American, pp. 94-104, September 1991

Machines Uncontrollable !


Artificial Intelligence started in the fifties with a very optimistic vision. Scientists at the time predicted that machines outsmarting humans would be built within 10-20 years. Unfortunately (or maybe luckily), this was far more optimistic than reality turned out to be. AI scientists then went through an era of pessimism and thought that machines outsmarting man was impossible.

However, the optimistic era restarted some years ago, thanks to the great increase in computer power over the years, which has followed Moore’s Law (computing power doubles roughly every two years).
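Under that doubling rule, the growth is easy to compute:

```python
# Back-of-the-envelope Moore's Law: if power doubles every 2 years,
# after n years it has multiplied by 2 ** (n / 2).
def power_multiplier(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(power_multiplier(10))  # 32.0: five doublings in a decade
print(power_multiplier(20))  # 1024.0
```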

“AI Scientists Worry Machines May Outsmart Man” is an article in the New York Times. The scientists are debating whether there should be limits on research that might lead to loss of human control over computer-based systems.

Evil computer


Why worry ?

So what’s new that makes scientists start to worry? Unstoppable computer viruses, Predator drones, self-driving cars, software-based personal assistants, service robots in the home, and robots navigating the world are all signs of machines that could harm mankind easily if one of the following happened:

  • Man lost control over them.
  • They were used by criminals.
  • Evil was programmed into them for research.
  • They had BUGS 🙂

The Singularity

Technological Singularity refers to the prediction that humans will create machines that outsmart humans, ending the human era as machines take control. The idea of an “intelligence explosion”, in which smart machines would design even more intelligent machines, was proposed by the mathematician I. J. Good in 1965. To understand the idea better, please watch The Matrix.

The power of forgetting nothing

A friend and I were chatting with a researcher working with DARPA about this issue during Cairo ICT 2010. I was saying that the singularity is very far away and that building extremely intelligent machines is impossible in the coming years. He (the researcher) stated that it’s not that far. Although it’s very hard for humans to invent algorithms that make machines outsmart humans, machines have a power not found in humans: the power of remembering everything 🙂. According to him, this fact will let machines learn extremely fast and outsmart humans in a short time.

Just imagine: all the experience and science a man learns in 40 years could be learned by a machine in, say, 7 months. I think you can now understand what I mean!

Strong, Weak and Fake AI !

Today, as always, I talk about an interesting topic 😛. People with no technical background in AI can also benefit from this post. So let’s start…


Many people use the word “AI” in a very generic way. However, specialists have classified AI into “Strong AI” and “Weak AI”. I’ve added another category called “Fake AI” (I discovered later that some specialists really use this term!). I will talk about the three categories in ascending order.

Fake AI

Fake AI is the AI we encounter most these days: software that tries to pretend it’s acting intelligently, but fails.

An example of this is many past and some current games: games with AI based on “if conditions”, so static-minded that you can practically “imagine” their “AI code” while playing. Of course, most old games were like that.

Fake AI is fake for two main reasons:

  1. It depends on static actions and plans: no randomization in actions, no learning from mistakes, no planning, no adaptation to new conditions, etc.
  2. If it is a game, it “cheats”, meaning that it acquires more information or capabilities than a normal player in the same situation would have. An example of this is the “Need for Speed” games: if you are ranked first and out-speeding your opponents by a big distance while the game is set to “Hard”, the opponent AI cars are given unrealistically high speeds so they can catch up, and they can even beat you before the race ends. Of course this is done to “fake” the intelligence of the opponent (a trick sometimes called rubber-banding). Another example occurs in most old strategy games, where the opponent knows everything about you and acts on what it illegally knows.
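The racing-game cheat in point 2 could be sketched like this. The speed numbers and the boost formula are made up, not taken from any actual game:

```python
# A sketch of rubber-banding: the AI car's speed is inflated in
# proportion to how far ahead the player is (all numbers invented).
def ai_target_speed(base_speed, player_lead, difficulty):
    """Boost the AI car's speed depending on the player's lead in metres."""
    boost = {"easy": 0.0, "normal": 0.1, "hard": 0.3}[difficulty]
    # Cap the lead factor so the cheat doesn't grow without bound.
    return base_speed * (1.0 + boost * min(player_lead / 100.0, 2.0))

print(ai_target_speed(200, player_lead=300, difficulty="hard"))  # about 320
print(ai_target_speed(200, player_lead=300, difficulty="easy"))  # 200.0
```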

I also consider many chatting bots (i.e., software which chats with humans; see this famous bot) fake AI. They are based on saying certain sentences or asking certain questions upon seeing certain words, and they acquire new knowledge in a poor, limited manner. For example, here is a conversation with another chatting bot (the source of the conversation is here):

Human: Women are all alike

Chatting Bot: IN WHAT WAY?

Human: They are always bugging us about something or other

Human: Well, my girlfriend made me come here

Human: She and my mother value therapy

Human: Necessity is the mother of invention


The program looks fairly intelligent at first glance, but it is based on trickery! In fact, it responds to keywords, such as “computer” and “dream”, and produces standard replies. If a sentence doesn’t contain any keyword, Eliza either produces a non-committal output, such as “Tell me more”, or turns the human’s sentence into a question. The program is easy to fool, as you can see in the last two sentences.
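That keyword trickery can be sketched in a few lines. The rules below are invented for illustration; they are not ELIZA’s actual script:

```python
import random

# A minimal ELIZA-style keyword responder (rules made up for illustration).
RULES = {
    "mother": "TELL ME MORE ABOUT YOUR FAMILY.",
    "computer": "DO COMPUTERS WORRY YOU?",
    "dream": "WHAT DOES THAT DREAM SUGGEST TO YOU?",
}
FALLBACKS = ["TELL ME MORE.", "PLEASE GO ON."]

def reply(sentence):
    """Scan for a known keyword; otherwise fall back to a stock phrase."""
    for keyword, response in RULES.items():
        if keyword in sentence.lower():
            return response
    return random.choice(FALLBACKS)

print(reply("She and my mother value therapy"))  # TELL ME MORE ABOUT YOUR FAMILY.
```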

There is much other software with “Fake AI”; I hope you got the message from these last two examples. Oh, one thing to note: many games are now moving from what I call “Fake AI” to what is known as “Weak AI”.

Weak AI (Applied AI, Narrow AI)

Weak AI is AI that acts intelligently in a specific domain but doesn’t think intelligently like humans. It makes you feel, in a semi-perfect manner, that it’s human-like ONLY in the domain it specializes in.

One example is “a small portion” of modern games. FIFA 09 (not 2010, because they say its AI is terrible) can be considered imperfect weak AI: you can feel more human-like actions as you play. However, it still needs much improvement to be “perfect” weak AI.

Another example is the game “Black and White 2”: the AI opponent is very adaptable and learns new strategies. However, it’s still not human-like AI. And even if it were human-like, it would still be Weak AI, because the intelligence is limited to the game only.

Expert systems are also considered Weak AI, since their intelligence is limited to the specialization they decide actions about.

Not only Fake AI but also Weak AI may be supported by clever programming tricks and techniques that allow certain specific problems to be solved using current technology. Most people consider what I call “Fake AI” a portion of “Weak AI”; however, I prefer to differentiate between them.

Weak AI is starting to spread across humanity. However, it is still in its beginnings.

Strong AI

Let’s see what Wikipedia says about it :

Strong AI is artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that can successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Strong AI is also referred to as “artificial general intelligence” or as the ability to perform “general intelligent action”. Science fiction associates strong AI with such human traits as consciousness, sentience, sapience and self-awareness.

Some references emphasize a distinction between strong AI and “applied AI” (also called “narrow AI” or “weak AI”): the use of software to study or accomplish specific problem solving or reasoning tasks that do not encompass (or in some cases are completely outside of) the full range of human cognitive abilities.

So, back to Omar: Strong AI is the dream of every AI scientist or geek. We are very far from it now, and it’s unpredictable when strong AI will become available. It’s only really thought about by extremely unique AI scientists and philosophers.

Strong AI would be tested by the famous “Turing Test” (due to Alan Turing), in which a human communicates with both a machine and another human at the same time, with the three in different places. If the judge cannot tell which is the machine and which is the human, the machine passes the test; otherwise it fails.

Cognitive Simulation

Cognitive Simulation could be considered another category of AI research. Yet I didn’t add it to the title because it’s a different topic in itself.

So what is Cognitive Simulation? The answer comes from an article on AlanTuring.net:

In cognitive simulation, computers are used to test theories about how the human mind works–for example, theories about how we recognise faces and other objects, or about how we solve abstract problems (such as the “missionaries and cannibals” problem described later). The theory that is to be tested is expressed in the form of a computer program and the program’s performance at the task–e.g. face recognition–is compared to that of a human being. Computer simulations of networks of neurons have contributed both to psychology and to neurophysiology.


Although Fake AI is fake, it’s still important in software, because much software (apart from games) doesn’t even require Weak AI in order to work well.

The technology industry is starting to rely on Weak AI for many things.

Strong AI is still a dream. However, it’s approaching steadily.

I think Cognitive Simulation helps in achieving Strong AI: if we understood in detail how the human mind works, we could apply more of it to computers. Besides, it is of great benefit to psychology and neurophysiology.

Bootstrapping and Artificial Intelligence


Bootstrapping: an expression I find extremely interesting. Bootstrapping originally means the impossible act of lifting oneself using one’s own bootstraps (the one shown in the figure). Bootstrapping, also named booting, has been used for a wide range of terms in science in general, especially computer science.


A man bootstrapping himself

Examples of Applying bootstrapping :

In business, bootstrapping is starting a business without external help or capital.

In statistics, bootstrapping is a resampling technique: you estimate the variability of a summary statistic by repeatedly resampling the data with replacement.

In computing in general, bootstrapping is the process by which a simple computer system activates a more complicated computer system.

In compilers, bootstrapping is writing a compiler for a computer language in that language itself.

In Networks, A Bootstrapping Node is a network node that helps newly joining nodes successfully join a P2P network.

In linguistics, Bootstrapping is a theory of language acquisition.
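Of the examples above, the statistical sense is the easiest to demonstrate: estimate the uncertainty of a sample mean by resampling made-up data with replacement.

```python
import random

# Bootstrap estimate of the uncertainty of a sample mean: resample the
# data with replacement many times and look at the spread of the means.
# (The data values are made up for illustration.)
random.seed(0)
data = [2.1, 2.5, 2.2, 3.0, 2.8, 2.4, 2.9, 2.6]

means = []
for _ in range(1000):
    resample = [random.choice(data) for _ in data]
    means.append(sum(resample) / len(resample))

means.sort()
# A rough 90% confidence interval for the mean.
print(means[50], means[949])
```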

Bootstrapping And AI …

Bootstrapping in AI is using a weak learning method to provide the starting information for a stronger learning method. For example, consider a classifier that classifies a set of samples. It uses clustering (“weak” unsupervised learning) to estimate the cluster of each sample, then treats each sample’s estimated cluster as its REAL class in the subsequent “stronger” supervised learning, which can finally achieve high performance.
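Here is a toy version of that cluster-then-classify idea on made-up one-dimensional data: a 2-means clustering step (the “weak” learner) produces labels, and a nearest-centroid “supervised” step then treats them as real.

```python
# Made-up 1-D samples forming two obvious groups.
samples = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]

# Weak step: 2-means clustering from rough initial centroids.
c0, c1 = min(samples), max(samples)
for _ in range(10):
    group0 = [x for x in samples if abs(x - c0) <= abs(x - c1)]
    group1 = [x for x in samples if abs(x - c0) > abs(x - c1)]
    c0, c1 = sum(group0) / len(group0), sum(group1) / len(group1)

# The estimated clusters now serve as "real" labels for training.
labels = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in samples]

def classify(x):
    """Strong step: nearest-centroid classifier built on estimated labels."""
    return 0 if abs(x - c0) <= abs(x - c1) else 1

print(labels)                         # [0, 0, 0, 1, 1, 1]
print(classify(1.1), classify(5.1))  # 0 1
```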

So as you see, the classifier has built estimates based on its OWN estimates. It has predicted new things based on its OWN previous predictions.

Bootstrapping and Reinforcement Learning

Since reinforcement learning is based on dynamic programming, bootstrapping plays an important role here too.

Reinforcement learning methods are mainly classified into two practical families: Monte Carlo methods and temporal-difference methods.

In Monte Carlo methods, the agent doesn’t update its estimates until its goal is achieved, and the update uses the complete, actually observed return. So no bootstrapping occurs here, because the reward is a “REAL, ASSURED” one.

In contrast, in temporal-difference methods the agent updates its estimates after every action it takes, based on an estimate of whether that action brought the goal closer. These “estimated” rewards affect its future actions; thus an agent using temporal-difference learning bootstraps all the way until it achieves its desired goal.
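A minimal TD(0) sketch on a made-up three-step chain shows the bootstrapping: each update leans on the agent’s OWN estimate of the next state rather than a real observed return. The chain and parameters are invented for illustration.

```python
# TD(0) on a tiny chain: states 0 -> 1 -> 2 -> goal, reward 1 only on
# reaching the goal (state 3 is terminal, value fixed at 0).
alpha, gamma = 0.5, 0.9
V = [0.0, 0.0, 0.0, 0.0]

for _ in range(100):  # episodes
    for s in (0, 1, 2):  # walk the chain left to right
        reward = 1.0 if s == 2 else 0.0
        # Bootstrapping: the update uses the agent's OWN estimate V[s+1],
        # not an actually observed return.
        V[s] += alpha * (reward + gamma * V[s + 1] - V[s])

print([round(v, 2) for v in V])  # converges toward [0.81, 0.9, 1.0, 0.0]
```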

In The End …

Bootstrapping is vital to Machine Learning because it increases the speed of learning and makes machine learning resemble human learning.

This was just an introduction. Maybe we’ll talk about it more later, but for now we have more interesting stuff to talk about 😉