Artificial Intelligence

by James G. Barr

Docid: 00021010

Publication Date: 2211

Report Type: TUTORIAL


Artificial intelligence is the simulation of human intelligence
processes, especially learning and adaptive behavior, by
machines. Among other applications, AI is employed to analyze big
data in order to create competitive advantages by extracting useful
knowledge from unstructured information. Critics of unregulated AI
believe that it poses a threat to society – a bits and bytes version of
the asteroid that killed the dinosaurs 65 million years ago.

Report Contents:

Executive Summary


“Intelligence” is generally defined as the ability to learn or
understand, or to deal with new or trying situations.1 Though intelligence
is a trait normally associated with biological beings such as chimpanzees,
dolphins, and, of course, humans, recent scientific and engineering
developments have enabled computers to exercise “artificial intelligence,”
or AI. Artificial intelligence is “the simulation of human
intelligence processes [especially learning and adaptive behavior] by
machines.”

Figure 1. Yes, Machines Can Be Smart Too

Source: Wikimedia Commons


Also known as “cognitive computing,” artificial intelligence is powering
a wide variety of business and consumer applications, such as sifting
through mountains of big data to extract precious business intelligence,
or permitting a vehicle to drive itself.

Machine Learning

The most prominent of AI technologies is “machine learning” (ML), which
enables a system to enhance its awareness and capabilities – that is, to
learn – without being explicitly programmed to do so. In some cases, ML
systems learn by studying information contained in data warehouses. In
other cases, ML systems learn by conducting thousands of data simulations,
detecting patterns, and drawing inferences.

ML systems don’t deduce the truth as humans do; rather they forecast the
truth based on available data. As analyst Nick Heath writes, “At a very
high level, machine learning is the process of teaching a computer system
how to make accurate predictions when fed data. Those predictions could
be:

  • “Answering whether a piece of fruit in a photo is a banana or an
    apple,
  • “Spotting people crossing the road in front of a self-driving car,
  • “[Determining] whether the use of the word book in a sentence relates
    to a paperback or a hotel reservation,
  • “[Deciding] whether an e-mail is spam, or
  • “Recognizing speech accurately enough to generate captions for a
    YouTube video.”3
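
The “predict from data” idea can be made concrete in a few lines of
Python. The sketch below is a deliberately tiny nearest-centroid spam
detector; the two features and the handful of training examples are
invented for this tutorial, not drawn from any product:

```python
# A minimal "learn from data, then predict" sketch: a nearest-centroid
# spam detector. Each message is reduced to two features:
# (exclamation marks, occurrences of the word "free").

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# Hand-labeled training examples: the "available data" the system
# forecasts from.
spam = [(5, 3), (4, 2), (6, 4)]
ham = [(0, 0), (1, 0), (0, 1)]

spam_c, ham_c = centroid(spam), centroid(ham)

def predict(features):
    """Forecast the label of an unseen message from the training data."""
    d_spam = (features[0] - spam_c[0]) ** 2 + (features[1] - spam_c[1]) ** 2
    d_ham = (features[0] - ham_c[0]) ** 2 + (features[1] - ham_c[1]) ** 2
    return "spam" if d_spam < d_ham else "ham"

print(predict((5, 2)))  # a shouty, "free"-laden message -> "spam"
print(predict((0, 0)))  # a plain message -> "ham"
```

Having seen only six labeled examples, the toy model forecasts – rather
than deduces – labels for new messages; real ML systems differ in scale,
not in kind.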

The Promise of AI

Dr. Jim Hendler, director of the Rensselaer Institute for Data
Exploration and Application (IDEA), contends that the scale of information
growth – and the pace of information change – make it impossible for
humans to handle big data without intelligent computers. “The natural
convergence of AI and big data is a crucial emerging technology
space. Increasingly, big businesses will need the AI technology to
overcome the challenges or handle the speed with which information is
changing in the current business environment.”4

For another perspective, analysts Erik Brynjolfsson and Andrew McAfee
assert that artificial intelligence is “the most important general-purpose
technology of our era,” as potentially influential as the invention of the
internal combustion engine, which “gave rise to cars, trucks, airplanes,
chain saws, and lawnmowers, along with big-box retailers, shopping
centers, cross-docking warehouses, new supply chains, and, when you think
about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found
ways to leverage the technology to create profitable new business models.”5

The Peril of AI

Much like genetic engineering, many of the ways in which artificial
intelligence will be used – and potentially misused – are presently
unknown, which is an unsettling proposition for some individuals,
particularly government officials, who may have to harness future AI
development. Some prominent people, including the late physicist and
cosmologist Stephen Hawking and Elon Musk (the futurist behind PayPal,
Tesla, and SpaceX) contend that AI presents an existential threat – a bits
and bytes version of the asteroid that killed the dinosaurs 65 million
years ago.

Intelligence? – Well, Not Really

It should be observed that the term artificial intelligence is
misleading, since to many people intelligence connotes consciousness, or
awareness of one’s surroundings. An AI program is not alive (as biological
beings are), and it is not conscious. A chess-playing AI program, for
example, does not know what chess is. As with all AI applications, the
program is ingesting massive amounts of information, making millions of
rules-based calculations, forming predictions (in this case, about its
opponent’s strategy), and selecting moves designed to counter that
strategy, i.e., to win the game. None of this is performed by the AI
program with any appreciation of the circumstances of the game, or the
nature or significance of its programmed achievements.
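
This point can be illustrated with a toy game-playing program. The
sketch below plays a trivial take-away game by exhaustively searching the
game tree – a vastly simplified stand-in for the searches a chess program
performs. The game and the code are inventions for this tutorial:

```python
# Minimax over a one-pile take-away game: players alternately remove
# 1-3 stones; whoever takes the last stone wins. The search "forms
# predictions" about the opponent purely by enumerating rule-based moves.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, wins): how many stones the player to move should
    take, and whether that move forces a win."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True          # taking the last stone wins outright
        if take < stones:
            _, opponent_wins = best_move(stones - take)
            if not opponent_wins:
                return take, True      # leave the opponent a losing position
    return 1, False                    # every line loses; take 1 and hope

print(best_move(10))  # from 10 stones, taking 2 forces a win
```

The program reliably selects winning moves, yet it “knows” nothing about
games, opponents, or winning – exactly the distinction the chess example
makes.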



“The main opportunities of artificial intelligence lie in its ability to:

  • “Reveal better ways of doing things through advanced probabilistic
    analysis of outcomes.
  • “Interact directly with systems that take actions, enabling the
    removal of human-intensive calculations and integration steps.”

– Gartner6

AI History

While interest in artificial intelligence can be traced back to Homer, who
wrote of mechanical “tripods” waiting on the gods at dinner, only in the
last half century have computer systems been fast enough – and programming
languages sophisticated enough – to allow actual AI development.7

Among the AI highlights:

In 1950, Alan Turing authored a seminal paper
that promoted the possibility of programming a computer to behave
intelligently, including the description of a landmark imitation game we
now know as “Turing’s Test.”8

In 1955, the term “artificial intelligence”
was coined by John McCarthy, a math professor at Dartmouth College.

In the 1960s, major academic laboratories were
formed at the Massachusetts Institute of Technology (MIT) and Carnegie
Mellon University (CMU) [then Carnegie Tech working with the Rand
Corporation]. At the same time, the Association for Computing
Machinery’s Special Interest Group on Artificial Intelligence (ACM SIGART)
established a forum for people in disparate disciplines to share ideas
about AI.9

The 1970s saw the development of “expert
systems” – computer systems that simulated human decision-making through a
strictly prescribed rule set.

In a 1997 man-machine chess match, IBM’s “Deep
Blue” program defeated world champion Garry Kasparov, a turning point in
the evolution of AI, at least from the public’s perspective.

In 2011, in a double-down event for the AI
industry, IBM’s Watson defeated a group of human champions in the game
show “Jeopardy!”.

In 2011, Apple released Siri, a virtual
assistant that uses a natural language interface to answer questions and
perform tasks for its human owner.10

In 2012, Google researchers Jeff Dean and
Andrew Ng trained a neural network of 16,000 processors to recognize
images of cats by showing it 10 million unlabeled cat images from YouTube.

From 2015 to 2017, Google Deepmind’s AlphaGo,
a computer program that plays the board game Go, defeated a number of
human champions.12

In 2017, the Facebook Artificial Intelligence
Research lab trained two chatbots to communicate with each other in order
to learn negotiating skills. Remarkably, during that process the two bots
invented their own language.13

In 2020, Baidu released the LinearFold AI
algorithm to medical teams developing a COVID-19 vaccine. “The algorithm
can predict the RNA sequence of the virus in only 27 seconds, which is 120
times faster than other methods.”14

AI Applications

AI applications are, of course, many and varied. Some of the more
interesting and broadly representative AI use cases include:

24X7 “Legal Aides” – The
legal profession is utilizing AI programs to search discovery documents –
personnel-intensive work usually performed by firm associates or
paralegals.

Earthquake Prediction – Some
scientists believe that AI analysis of seismic data will improve our
understanding of earthquakes and provide accurate early warnings. As
reported by Thomas Fuller and Cade Metz of The New York Times,
Paul Johnson, a fellow at the Los Alamos National Laboratory, is “actually
hopeful for the first time in [his] career that we will make progress on
this problem.”15

Creation of Real Art – Jimmy
Im of CNBC reports that an AI-generated portrait, “Edmond de Belamy, from
La Famille de Belamy,” sold for $432,000. According to Pierre Fautrel,
co-founder of Obvious, the art collective that created the work, “We gave
the algorithm 15,000 portraits and the algorithm understands what the
rules of the portraits are, and creates another one.”16

Mass Customization – Analyst
Ron Schmelzer reports that “AI is making mass-customization possible for
many different industries by leveraging the power of machine
learning-enabled hyperpersonalization to tailor their offerings
and build solutions customized for each individual customer, patient,
client, or citizen.”17

Agriculture – Analyst
Zulaikha Geer warns that “the world will need to produce 50 percent more
food by 2050 because we’re literally eating up everything! The only way
this can be possible is if we use our resources more carefully. With that
being said, AI can help farmers get more from the land while using
resources more sustainably.”18

Cosmology – As one example,
astronomers used AI to process years of data obtained by the Kepler
telescope, which allowed them to identify a distant eight-planet solar
system.

Unsupervised Learning

AI systems typically learn from datasets curated by humans – a mostly
manual process that can be extremely expensive. While this form of
“supervised” learning has produced breakthroughs ranging from voice
assistants to autonomous vehicles, researchers are pursuing a new
approach, “unsupervised” learning, that removes any reliance on human aid.

Analyst Rob Toews reports that “Many AI leaders see unsupervised learning
as the next great frontier in artificial intelligence. … [T]he system learns
about some parts of the world based on other parts of the world. By
observing the behavior of, patterns among, and relationships between
entities – for example, words in a text or people in a video – the system
bootstraps an overall understanding of its environment. Some researchers
sum this up with the phrase ‘predicting everything from everything else.’

“Unsupervised learning more closely mirrors the way that humans learn
about the world: through open-ended exploration and inference, without a
need for the ‘training wheels’ of supervised learning.”20
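
The distinction can be shown with a miniature example. The
one-dimensional k-means routine below, written for this tutorial with
made-up sensor readings, discovers two groups in unlabeled data without
any human-provided labels – the essence of the unsupervised approach:

```python
# Unsupervised learning in miniature: 1-D k-means (k=2) on unlabeled
# readings. No human ever labels a point; the algorithm finds the two
# clusters on its own.

def kmeans_1d(data, iters=10):
    lo, hi = min(data), max(data)     # crude initial cluster centers
    for _ in range(iters):
        # Assign each point to its nearer center, then recompute centers.
        a = [x for x in data if abs(x - lo) <= abs(x - hi)]
        b = [x for x in data if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

readings = [1.0, 1.2, 0.9, 1.1, 9.8, 10.2, 10.0, 9.9]
centers = kmeans_1d(readings)
print(centers)  # two cluster centers, near 1.05 and 9.975
```

In supervised learning, a human would have labeled each reading first; here
the structure emerges from the data alone.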

Current View


AI and Jobs

A Pew Research report, “AI, Robotics, and the Future of Jobs,” published
in August 2014, explored the impact of AI and robotics on job creation and
retention. Based on the responses of nearly 1,900 experts, the report
found reasons to be hopeful and reasons to be concerned. The “concerning”
aspects of AI, still largely relevant today, are as follows:

  • “Impacts from automation have thus far impacted mostly blue-collar
    employment; the coming wave of innovation threatens to upend
    white-collar work as well.
  • “Certain highly-skilled workers will succeed wildly in this new
    environment – but far more may be displaced into lower paying service
    industry jobs at best, or permanent unemployment at worst.
  • “Our educational system is not adequately preparing us for
    work of the future, and our political and economic institutions are
    poorly equipped to handle these hard choices.”

Infrastructure Demands

According to a white paper published by Continental Resources, AI places
a heavy demand on enterprise infrastructure. “Adopting artificial
intelligence … technologies is the top initiative for [a substantial
number of] IT leaders. Many of them are focused on investing in solutions
like machine learning (ML) and deep learning (DL), two AI techniques that
require massive stores of data and are extremely compute intensive. This
rapid, impending adoption presents server and storage infrastructure with
never-before-seen performance demands and leads many decision makers to
ask, ‘Is today’s infrastructure enough?'”22

AI for All

While large private sector companies and public sector agencies are
investing in artificial intelligence, smaller organizations with fewer
resources are not presently enjoying the AI advantage. As analyst Sri
Krishna observes, “When it comes to artificial intelligence … adoption,
there is a growing gap between the haves and the have-nots.”23

In an effort to bridge this chasm, many small-to-medium-sized enterprises
(SMEs) are embracing artificial intelligence as a service (AIaaS).
According to NASSCOM, the global AIaaS market is expected to exceed $41
billion by 2025, nearly five times the size of today’s market. Helping
accelerate the growth, “AIaaS is often built on big cloud providers
including IBM, SAP SE, Google, AWS, Salesforce, Intel, and Baidu.”24

Edge AI

In the view of many experts, the future of artificial intelligence – and,
indeed, edge computing – is “edge AI,” in which machine learning
algorithms process data generated by edge devices locally. Local
processing removes the latency inherent in remote processing, in which
data collected by an edge device is transmitted to another device or to
the cloud for processing, and the processed results are then returned to
the originating device to trigger an action.

In edge AI, this data diversion from and to the edge device is eliminated
by utilizing AI algorithms incorporated within the edge device or the edge
system to process the data directly. This removes the “middleman” element.

By combining data collection with smart data analysis and smart data
action, edge AI:

  • Expedites critical operations, especially where speed is essential
    (autonomous vehicle operation is a classic example); and
  • Eliminates multiple points of failure, since everything occurs
    locally, or at the edge.

In many respects, edge AI is the technology (or class of technologies)
that will ultimately enable enterprises to exploit the new class of
intelligent resources represented by the Internet of Things.
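
The local-versus-remote distinction can be sketched in a few lines. In
the illustration below – a made-up thermal-protection example, with a
simple threshold rule standing in for a trained ML model – inference runs
on the device itself, so an action can be taken immediately rather than
after a cloud round trip:

```python
# Edge-AI sketch: the "model" is embedded in the edge device, so each
# reading is acted on locally, with no network latency and no remote
# point of failure.

def on_device_model(temperature_c):
    """Stand-in for an ML model deployed to the edge device."""
    return "shutdown" if temperature_c > 90.0 else "ok"

def edge_loop(readings):
    """Process each sensor reading locally; no data leaves the device."""
    return [on_device_model(r) for r in readings]

print(edge_loop([72.0, 88.5, 93.1]))  # the third reading triggers action
```

In the remote-processing pattern the same three readings would each make a
round trip to the cloud before the device could act.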

Natural Language Generation

One of the transformative achievements of artificial intelligence
research, natural language generation (NLG), also known as automated
narrative generation (ANG), is a technology that converts enterprise data
into narrative reports by recognizing and extracting key insights
contained within the data, and translating those findings into plain
English (or another language). There are articles, for example, prepared
by respected news outlets like the Associated Press that are actually
“penned” by computers. While perhaps overly optimistic, some analysts have
contended that as much as 90 percent of news could be algorithmically
generated by the mid-2020s, much of it without human intervention.

Natural language generation is commonly used to facilitate the following:

  • Narrative Generation – NLG can convert charts, graphs, and
    spreadsheets into clear, concise text.
  • Chatbot Communications – NLG can craft context-specific responses to
    user queries.
  • Industrial Support – NLG can give voice to Internet of Things (IoT)
    sensors, improving equipment performance and maintenance.
  • Language Translation – NLG can transform one natural language into
    another.
  • Sentiment Analysis – First, NLU determines which of several languages
    resonates with users; second, NLG delivers messages in the users’
    preferred languages.25
  • Speech Transcription – First, speech recognition is employed to
    understand an audio feed; second, NLG turns the speech into text.
  • Content Customization – NLG can create marketing and other
    communications tailored to a specific group or even individual.
  • Robot Journalism – NLG can write routine news stories, like sporting
    event wrap-ups, or financial earnings summaries.
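
The first of these use cases, narrative generation, can be sketched with
a simple template-filling routine. Production NLG systems add content
selection, aggregation, and grammar models; the data and field names here
are invented for illustration:

```python
# Template-based natural language generation in miniature: turn a row
# of structured data into a plain-English sentence.

def narrate(row):
    """Convert one row of (hypothetical) sales data into narrative text."""
    direction = "rose" if row["current"] >= row["previous"] else "fell"
    change = abs(row["current"] - row["previous"]) / row["previous"] * 100
    return (f"{row['region']} sales {direction} {change:.1f} percent, "
            f"from ${row['previous']:,} to ${row['current']:,}.")

print(narrate({"region": "Northeast", "previous": 100_000, "current": 112_500}))
```

The same pattern, applied row by row, is how a spreadsheet becomes a
readable report.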



The AI Market

According to a recent analysis by market research firm Nova One Advisor,
the global artificial intelligence market, valued at $93.8 billion in
2021, will reach a remarkable $1,811.9 billion by 2030, reflecting a
compound annual growth rate (CAGR) of 38.9 percent during the 2022-2030
forecast period.
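
The arithmetic behind these figures is internally consistent, as a quick
calculation shows (the 2021 base compounds over the nine years of the
forecast period):

```python
# Sanity check on the forecast: $93.8 billion (2021) compounding at a
# 38.9 percent CAGR over the nine years 2022-2030.

base_billions, cagr, years = 93.8, 0.389, 9
projected = base_billions * (1 + cagr) ** years
print(f"${projected:,.1f} billion")  # roughly $1.8 trillion, in line
                                     # with the $1,811.9 billion figure
```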

Leading market sectors include advertising and media; healthcare; and
banking, financial services, and insurance (BFSI).26

The Internet of Things

The age of the Internet of Things (IoT), in which every machine or device
is “IP addressable” and, therefore, capable of being connected with
another machine or device, is rapidly approaching. Since the IoT will
produce big data, often billions of data elements, AI will be needed to
“boil” that data down into meaningful and actionable intelligence.

As analyst Brian Buntz reports, according to Richard Soley, executive
director of the Industrial Internet Consortium, “Anything that’s
generating large amounts of data is going to use AI because that’s the
only way that you can possibly do it.”27

Analyst Iman Ghosh observes that the union of AI and IoT, or AIoT, is
already affecting four major technology segments:

  • Wearables, which “continuously monitor and track user
    preferences and habits,” particularly impactful in the healthcare
    industry.
  • Smart Homes, which “[learn] a homeowner’s habits and
    [develop] automated support.”
  • Smart Cities, where “the practical applications of AI
    in traffic control are already becoming clear.”
  • Smart Industry, where “from real-time data analytics
    to supply-chain sensors, smart devices help prevent costly errors.”28

Gartner on AI

Forecasting the future, Gartner analysts predict that by 2025:

  • “Fifty (50) percent of enterprises will have devised AI orchestration
    platforms to operationalize AI, up from fewer than 10 percent in 2020.
  • “AI will be the top category driving infrastructure decisions, due to
    the maturation of the AI market, resulting in a tenfold growth in
    computing requirements.
  • “Ten (10) percent of governments will use a synthetic population with
    realistic behavior patterns to train AI, while avoiding privacy and
    security concerns.”29

AI As a Threat

While the proponents of AI have a positive tale to tell, they must also
acknowledge that AI arrives with certain risks.

As analyst Nick Bilton ponders, “Maybe a rogue computer momentarily
derails the stock market, causing billions in damage. Or a driverless
car freezes on the highway because a software update goes awry.”30

The American Civil Liberties Union (ACLU) has expressed concern about
AI’s impact on the privacy of public spaces. “Increasingly, the need
for analysis [by law enforcement] will impel the deployment of artificial
intelligence techniques to sift through all the data and make judgments
about where security resources should be deployed, who should be subject
to further scrutiny, etc.”31

AI As an Existential Threat

Even if artificial intelligence systems never achieve a sentient or
self-aware state, many well-respected observers believe they pose an
existential threat.

Nick Bostrom, author of the book Superintelligence, suggests
that while self-replicating nanobots (or microscopic robots) could be
arrayed to fight disease or consume dangerous radioactive material, a
“person of malicious intent in possession of this technology might cause
the extinction of intelligent life on Earth.”32

The critics, including Elon Musk and the late Stephen Hawking, allege two
main problems with artificial intelligence:

  • First, we are starting to create machines that think like humans but
    have no morality to guide their actions.
  • Second, in the future, these intelligent machines will be able to
    procreate, producing even smarter machines – a process often referred to
    as “superintelligence.” Colonies of smart machines could grow at an
    exponential rate – a phenomenon for which mere people could not erect
    sufficient safeguards.33

“We humans steer the future not because we’re the strongest beings on
the planet, or the fastest, but because we are the smartest,” said James
Barrat, author of Our Final Invention: Artificial Intelligence and
the End of the Human Era. “So when there is something smarter
than us on the planet, it will rule over us on the planet.”34

The Master Algorithm

If artificial intelligence initiatives are aimed at creating machines
that think like human beings – only better – then the ultimate goal of AI
is to produce what analyst Simon Worrall calls the “Master Algorithm”:
“The Master Algorithm is an algorithm that can learn anything from
data. Give it data about planetary motions and inclined planes, and
it discovers Newton’s law of gravity. Give it DNA crystallography
data and it discovers the Double Helix. Give it a vast database of
cancer patient records and it learns to diagnose and cure cancer. My gut
feeling is that it will happen in our lifetime.”35


[return to top of this

Expect the Unexpected

Predicting the future is always problematic, but particularly so when the
object of the predictions has seemingly unlimited potential, as with
artificial intelligence. Analyst Ron Schmelzer previews the dilemma by
reminding us how little we knew – or imagined – about the possibilities
surrounding portable phones when they were introduced. “In the 1980s the
emergence of portable phones made it pretty obvious that they would allow
us to make phone calls wherever we are, but who could have predicted the
use of mobile phones as portable computing gadgets with apps, access to
worldwide information, cameras, GPS, and the wide range of things we now
take for granted as mobile, ubiquitous computing? Likewise, the
future world of AI will most likely have much greater impact in a much
different way than what we might be assuming today.”36

Devise an AI Plan

Analyst Kristian Hammond offers a variety of suggestions for managing
artificial intelligence in the enterprise. These include:

  • “Remember that your goal is to solve real business problems,
    not simply to put in place an ‘AI strategy’ or ‘cognitive computing’
    work plan. Your starting point should be focused on business goals
    related to core functional tasks.
  • “Focus on your data. You may very well have the data
    to support inference or prediction, but no system can think beyond the
    data that you give it.
  • “Know how your systems work, at least at the level of
    the core intuition behind a system.
  • “Understand how a system will fit into your workflow and who
    will use it. Determine who will configure it and who will
    work with the output.
  • “Remain mindful that you are entering into a human-computer
    partnership. AI systems are not perfect and you need to be
    flexible. You also need to be able to question your system’s answers.”37

AI pioneer Andrew Ng advises enterprise officials to keep it simple. “The
only thing better than a huge long-term [AI] opportunity is a huge
short-term opportunity, and we have a lot of those now.”38

Invest in Narrow AI

In assessing the present state of artificial intelligence, President
Obama’s National Science and Technology Council (NSTC) Committee on
Technology drew a sharp distinction between so-called “Narrow AI” and
“General AI.”

Remarkable progress has been made on Narrow AI, which addresses specific
application areas such as playing strategic games, language translation,
self-driving vehicles, and image recognition. Narrow AI underpins many
commercial services such as trip planning, shopper recommendation systems,
and ad targeting, and is finding important applications in medical
diagnosis, education, and scientific research.

General AI (sometimes called Artificial General Intelligence, or AGI)
refers to a future AI system that exhibits apparently intelligent behavior
at least as advanced as a person across the full range of cognitive tasks.
A broad chasm seems to separate today’s Narrow AI from the much more
difficult challenge of General AI. Attempts to reach General AI by
expanding Narrow AI solutions have made little headway over many decades
of research. The current consensus of the private-sector expert community,
with which the NSTC Committee on Technology concurs, is that General AI
will not be achieved for at least decades.39

From an enterprise perspective, continuing to invest in Narrow AI – over
the much more speculative General AI – seems the prudent course.

Protect Human Resources

Advances in information technology and robotics are transforming the
workplace. Productivity gains are enabling enterprises to perform
more work with fewer workers. Artificial intelligence programs –
including AI-enhanced robots – will only accelerate this trend. The
enterprise Human Resources department should develop a strategy for
retraining – or otherwise assisting – workers displaced due to AI.

Beware of Legal Liabilities

Where there is new technology, there is litigation. Legal Tech News
emphasizes “the importance of not waiting until an incident occurs to
address AI risks. When incidents do occur, for example, it’s not simply
the incidents that regulators or plaintiffs scrutinize, it’s the entire
system in which the incident took place. That means that reasonable
practices for security, privacy, auditing, documentation, testing and more
all have key roles to play in mitigating the dangers of AI. Once the
incident occurs, it’s frequently too late to avoid the most serious
consequences.”

Insist on Explainable AI

Any process, whether digital or physical, whether performed by a human or
artificial intelligence, must be trusted. AI applications often rely on
derived logic, which may be difficult for humans to discern and
understand. To gain essential trust in AI operations, the operations must
be explainable. To that end, the US National Institute of Standards and
Technology (NIST) has developed four principles of explainable artificial
intelligence – principles to which AI systems should adhere.

Principle 1. Explanation:
Systems deliver accompanying evidence or reason(s) for all outputs.

The Explanation principle obligates AI systems
to supply evidence, support, or reasoning for each output.

Principle 2. Meaningful:
Systems provide explanations that are understandable to individual users.

A system fulfills the Meaningful principle if
the recipient understands the system’s explanations.

Principle 3. Explanation Accuracy:
The explanation correctly reflects the system’s process for generating the
output.

Together, the Explanation and Meaningful
principles only call for a system to produce explanations that are
meaningful to a user community. These two principles do not require that a
system delivers an explanation that correctly reflects a system’s process
for generating its output. The Explanation Accuracy principle imposes
accuracy on a system’s explanations.

Principle 4. Knowledge Limits:
The system only operates under conditions for which it was designed, or
when the system reaches a sufficient confidence in its output.

The previous principles implicitly assume that
a system is operating within its knowledge limits. This Knowledge Limits
principle states that systems identify cases for which they were not
designed or approved to operate, or for which their answers are not
reliable. By identifying and
declaring knowledge limits, this practice safeguards answers so that a
judgment is not provided when it may be inappropriate to do so. The
Knowledge Limits Principle can increase trust in a system by preventing
misleading, dangerous, or unjust decisions or outputs.41
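
The four principles can be illustrated together in a small sketch. The
hypothetical loan screener below returns an output with a human-readable
reason (Explanation, Meaningful), derives that reason from the actual rule
it applied (Explanation Accuracy), and abstains outside its designed
operating range (Knowledge Limits). The thresholds and field names are
invented for this tutorial:

```python
# A rule-based screener honoring NIST's four principles in miniature:
# every answer carries the rule that produced it, and the system
# abstains outside its designed operating range.

def screen(income, debt):
    # Knowledge Limits: refuse to answer outside the design range.
    if income < 0 or debt < 0 or income > 1_000_000:
        return ("no decision",
                "input outside the range the system was designed for")
    ratio = debt / income if income else float("inf")
    # Explanation Accuracy: the stated reason IS the rule applied.
    if ratio <= 0.35:
        return "approve", f"debt-to-income ratio {ratio:.2f} is at or below 0.35"
    return "decline", f"debt-to-income ratio {ratio:.2f} exceeds 0.35"

print(screen(80_000, 20_000))   # approve, with its reason
print(screen(80_000, 50_000))   # decline, with its reason
print(screen(-1, 0))            # abstains: outside knowledge limits
```

Because the explanation is generated from the same rule that produced the
decision, it cannot drift out of step with the system’s actual behavior.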

Don’t Obsess Over Nightmare Scenarios

President Obama’s National Science and Technology Council (NSTC)
Committee on Technology has concluded that the long-term concerns about
super-intelligent General AI should have little impact on current AI
policy and practice. “The policies the Federal Government should adopt in
the near-to-medium term if these fears are justified are almost exactly
the same policies the Federal Government should adopt if they are not
justified. The best way to build capacity for addressing the longer-term
speculative risks is to attack the less extreme risks already seen today,
such as current security, privacy, and safety risks, while investing in
research on longer-term capabilities and how their challenges might be
managed.”



About the Author


James G. Barr is a leading business continuity analyst
and business writer with more than 30 years’ IT experience. A member of
“Who’s Who in Finance and Industry,” Mr. Barr has designed, developed, and
deployed business continuity plans for a number of Fortune 500 firms. He
is the author of several books, including How to Succeed in Business
BY Really Trying, a member of Faulkner’s Advisory Panel, and a
senior editor for Faulkner’s Security Management Practices.
Mr. Barr can be reached via e-mail at
