Exponential innovation:
imagining the present.
Curated by Rimadesio
“I have the feeling that the world outside
is calling me. Whispering that there is
something more”, confides Dolores Aber-
nathy, the caring, archetypal rancher’s
daughter in the American Wild West of the
19th century. Dolores is also a 3D-printed
android and equipped with artificial intelli-
gence. She and her colleagues have become
the “unwitting” attractions of Westworld1,
an amusement park where nothing is
off-limits in keeping with the spirit of an
authentic western experience. “We are
already experiencing a situation of artificial
intelligence”, reflects Lisa Joy, co-creator
of the television series Westworld (2016):
“it is just that we don’t see robots, we see
smartphones. We think ‘well, that is a
small step’. But if we look at it as a whole,
we realize we are moving towards a world
in which our lives and our thoughts are
loaded onto the web. We are already living
and experiencing artificial intelligence”.
Artificial Intelligence (AI) and artificial
consciousness (AC), additive manufactur-
ing (AM), the internet of things (IoT) and
the internet of everything (IoE) are just
some of the technological frontiers that the
television series enacts, combining them
in a single present-day setting. The technologies
of the original film, anno Domini 1973, seem
to belong to another era. Written and directed
by Michael Crichton, it imagined humanoid
robots, similar to C-3PO in Star Wars (1977)
or to the Terminator (1984), in the near
future (1983).
Science fiction came about as a literary
genre in England in the 19th century, where
the technological development driving the
rampant industrial revolution triggered a
series of profound social transformations,
arousing feelings of both hope and fear. A
rich imagination was brought to life, focussing
on the possible outcomes of scientific
discoveries and technological applications.
In the first science fiction novel, written
by the eighteen-year-old Mary
Shelley in 1816, the main character is a
young medical student who, traumatized
by his mother’s death, conceives the idea
of creating the perfect, strong and untouch-
able being. Frankenstein creates his
creature by sewing together different parts
of corpses and brings him to life with
electricity. At that time there was widespread
trafficking of corpses fuelled by medical
schools and their need to practice dissection
and, at the same time, groundbreaking
studies on electric current (Hans Christian
Ørsted, 1820; Michael Faraday, 1821;
André-Marie Ampère, 1826; Georg Ohm,
1827) were laying the groundwork for
the invention of electric motors and the
second industrial revolution.
The novel Jurassic Park, written by
Michael Crichton in 1990, imagined
that the development of genetic engineering
would make it possible to bring prehistor-
ic giants back to life using the fossilized
DNA of dinosaurs. On the 30th of July 2003
a team of French and Spanish scientists
succeeded in bringing the bucardo (a recently
extinct Spanish wild mountain goat) back
to life for just a few minutes. In 2012, in
San Francisco, the project Revive & Restore
was launched with the aim of reviving extinct
animals by implanting their embryos into the
most genetically similar living species.
In Jurassic Park the genetic code extracted
from mosquitoes was read by supercomputers,
able to reduce the length of a
two-year operation down to just a few
minutes. When the novel was written the
Human Genome Project2 was in its early
days. It required a 2.7 billion dollar invest-
ment and was declared complete in 2003.
By 2015 the human genome could be
sequenced in a few hours at the cost of 1,000
dollars. We are therefore witnessing a
short-circuit: the time horizons imagined
by science fiction are collapsing, and the
gap between fiction and reality is becoming
increasingly short.
The appearance of the communicator in
Star Trek in 1966 inspired the invention of
the mobile phone. Martin Cooper, head
of a research team for Motorola, developed
a prototype in 90 days that was then
presented to the press in New York on the
3rd of April 1973. The first mobile
phone was marketed in 1983, at $3,995
($9,300 in today’s prices). Technology
fuels both hope and concern; it is the engine
of human creativity, which in turn
accompanies and traces the future, generating
a short-circuit. The boundaries between
the present and the future, between imagina-
tion and reality, are worn thin: it is in fact
science fiction itself that inspires and
drives technological evolution. Speed is
undoubtedly key: first of all in the develop-
ment of technologies but also in their
diffusion and use in different sectors and
geographical areas. Added to this, the web
already connects every corner of the earth
and makes the sharing of any invention
even more rapid: inventions can spread in
practically no time at all, and an increasing
number of people who get to know about
them can help to enhance and improve them.
On the other hand, emerging technologies
have a high improvement rate
after the prototype phase, when they
start to be applied. Before James Watt,
steam engines harnessed approximately
1% of the energy released by the fuel they
burned. Between 1765 and 1776 Watt
enhanced this performance three-fold.
From the first 4.4 kW model, Watt went
on to build a 7.5 kW model in 1781. In
1876 a 1,000 kW steam engine was
produced, and in 1900 a 2,200 kW one.
Gradually, however, as technology reaches
a certain maturity, increasingly intense efforts
can lead to increasingly modest results:
the physical limits of improvement have
been reached. The difference lies in
the rhythm and rate of improvement and
in its duration. The historian Ian Morris
wrote: “even if the (steam) revolution took
several decades to develop, it was, in any
case, the biggest and fastest transformation
in the entire history of the world”.
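To make this rhythm concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures quoted above; the formula is the standard compound annual growth rate, and the transistor comparison anticipates Moore’s Law, discussed below.

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Watt's 7.5 kW engine of 1781 to the 2,200 kW engines of 1900.
rate = cagr(7.5, 2_200, 1900 - 1781)
print(f"Steam power grew by roughly {rate:.1%} per year for 119 years")
# ~4.9% per year, sustained for over a century: slow by digital
# standards (doubling every 18 months is ~59% per year), but long
# enough to transform the world, as Morris observes.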
Speed and duration are relative concepts:
information technology has taken us to
a new dimension. In 1965 Gordon Moore3,
the co-founder of Intel, observed that the
number of transistors contained within an
integrated circuit increased two-fold every
year: there were only ten components in
the first chip that was assembled in 1958.
He ventured that in the short term this
rate of growth would remain steady.
Moore’s Law proved to be accurate for
over 50 years: the number of transistors in
a chip doubled every 18 months. In June
2017 IBM, Samsung and Globalfoundries
announced a new industrial process that
would allow for the development of chips
containing 30 billion transistors. The
speed and energy efficiency of super-
computers, download speeds, the
efficiency of hard drives, and numerous
other innovations in the digital and
information technology fields reflect
Moore’s Law. In 1996 the American
government developed ASCI Red: it cost
55 million dollars, took up a surface area
of 200 square meters and drew 800 kW
of power. It was the first supercomputer
to exceed 1.8 teraflops in speed. In 2006,
however, a new machine was built able
to perform at the same speed: it cost 500
dollars, took up much less than a square
meter of space and drew 200 W. It was
the PlayStation 3.
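The arithmetic can be checked in a few lines; a sketch using only the endpoints quoted in the text (ten components in 1958, 30 billion transistors announced in 2017, and the two machines at roughly the same 1.8 teraflops).

import math

# Implied doubling period for transistors per chip, from the
# figures above: 10 components in 1958, 30 billion in 2017.
doublings = math.log2(30e9 / 10)                # ~31.5 doublings
months = (2017 - 1958) * 12 / doublings
print(f"{doublings:.1f} doublings, one every ~{months:.0f} months")
# ~22 months: in line with the 18-24 month range usually quoted.

# Price-performance: ASCI Red (1996, 55 million dollars) versus
# the PlayStation 3 (2006, 500 dollars) at the same ~1.8 teraflops.
print(f"The same speed at 1/{55_000_000 // 500:,}th of the price")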
Growth takes place at an unprecedented
exponential rate and, prolonged over
time, puts us in front of things that, in
terms of size and magnitude, go beyond
our ability to understand: scales of
magnitude we are simply “not equipped for”.
Sustainability of exponential growth
over time therefore takes on a whole
new meaning. When it comes to steam
engines, airplane speed, the production of
electricity, the weight of automobiles,
the physical limits are tangible. The digital
world and information technology present
us instead with a different set of problems.
The limitations are much more relative:
“They concern the number of electrons per
second that can be made to pass through
a channel in an integrated circuit, or how
fast the rays of light can pass through a
fiber optic cable”. Brynjolfsson and McAfee,
researchers at MIT and the authors of
The Second Machine Age4 (2014), observe
that exponential speed leads us to ideas
of such magnitude that they seem abstract;
we live “in an era in which what arrived
first is no longer a particularly reliable guide
to what will follow: science fiction contin-
ues to become reality”. Human beings are
not the only ones who exchange informa-
tion: machines “chat” more and more.
M2M (machine-to-machine) devices,
which communicate directly with one
another rather than through users, over
any communication channel, represented
34% of all internet-connected devices in
2016. The remaining 66% was made up of
personal computers, tablets, desktops,
televisions and smartphones. CISCO
forecasts that, by 2021, M2M devices,
including cars, industrial robots, medical
equipment and fitness sensors, will, at
51%, overtake traditional online
devices. The neologism Internet of
Things was used for the first time in 1999
to describe objects able to react to their
surroundings, collecting and transmitting
data, drawing on and using information.
The objects communicate among
themselves and with their surroundings
by means of chips and sensors. Today, the
physical world can be (almost) entirely
digitalized, and this is one of the most
important innovations to have taken place
over recent years.
Digitalizing means transforming a
physical phenomenon into the
language of computers, into a sequence of
numbers expressed in a binary format,
or in other words into information that can
be archived, modified and re-used.
The economic implication is huge: digital
information has a marginal cost of
reproduction that is next to nothing, and
it doesn’t run out when used; indeed, its
value increases as the number of people
using it grows. Data is produced, in real time
and on a large scale, from sensors, audio
and video devices, networks, transactional
applications, log files, internet and social
media. Ninety percent of the data that was
available in 2013 had been created in the
two years leading up to it. What is more,
data continues to increase, not only in
volume but also in variety and velocity. The need to
analyse and process tonnes of heterogeneous
data in increasingly shorter time scales
is driving the development of analysis tech-
niques that go under the name of Big Data5.
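What digitalizing means in practice can be shown in a minimal sketch; the temperature reading below is an arbitrary stand-in for any sensor-produced measurement.

temperature_c = 21.5                  # a physical phenomenon, measured
raw = int(temperature_c * 10)         # quantized to fixed precision: 215
bits = format(raw, "016b")            # expressed in binary format
print(bits)                           # 0000000011010111

# Once digital, the information can be archived, modified and
# re-used; copying it costs next to nothing and does not consume it.
copy = bits
assert copy == bits                   # the original is not "used up"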
Potentially, such information can be
processed and used in real time in order
to make choices and take decisions, but
the ever increasing volume and speed with
which this data is produced mean that
new storage technologies are required (like
blockchain), along with technologies that
make it possible to take full advantage
of the computing power of machines to
perform increasingly complex operations.
In The New Division of Labor, written in
2004, the authors Frank Levy and Richard
Murnane
predicted a labour market in which the
professional skills required would be
defined by the limits of computers and
information technology. Computers are, in
fact, able to perform all sorts of symbolic
operations, from mathematics to logic,
through to language, and therefore can
already replace any human activity that
can be described using algorithms.
The exclusively human ability to examine
information and to recognize patterns
would be preserved.
The example that is usually given is that
of driving in traffic: “A truck driver has a
windscreen and so is able to recognise
what is ahead. Articulating this knowledge
and inserting it into an IT program for
every situation [...] is an extremely difficult
task at the moment. Computers cannot
substitute human beings so easily”.
In 2004 the Grand Challenge failed.
This was a challenge open to completely
autonomous vehicles, sponsored by
DARPA, the US Defense Advanced
Research Projects Agency. The aim was
to complete, in the shortest time possible,
a 200 kilometer track in the Californian
desert. The car that went the furthest covered
just 5% of the track and then went off the
road on a tight bend. Shortly afterwards, on
the 9th of October 2010, Google announced
its success over 140,000 miles of tarmac:
“Our automated cars use video cameras,
radar sensors and a laser range finder to
‘see’ traffic, as well as detailed maps to
navigate the road ahead. All of this is made
possible by our data processing centres,
which process huge amounts of data from
our cars while they are mapping the terrain”.
The conversion of data that is miscellaneous
in origin and content into a
standard language, the binary one, means
that information from different fields
can, at a surprisingly fast rate, be connected.
Digital technologies coupled with the
exponential rate of improvement in informa-
tion technologies allow for the simultane-
ous development and recombination of
innovations in different sectors: new
materials, additive manufacturing, DNA
sequencing, nanotechnology, renewable
energies, advanced robotics and quantum
information technology. According
to Klaus Schwab, founder of the World
Economic Forum, “the combination
of these new technologies and their interac-
tion across the physical, digital and biologi-
cal domains makes this industrial revolution
different from the previous ones”.
First there was steam and the mechani-
zation of work previously carried out by
humans or animals; then there was electricity,
the assembly line and mass produc-
tion. The third industrial era came later
with the advent of computers and automa-
tion, when robots began replacing human
beings on the assembly line. Now we are
entering the fourth industrial revolution,
in which computers and automation will
blend together in new ways, with robots
controlled by systems of Artificial
Intelligence6 able to learn and operate
without any human intervention.
The biggest limit to the full deployment
of technology’s potential is actually
the speed with which we change our habits,
learn to exploit the potential of innovations
intuitively and systematically, and conceive
new ways of designing processes, organizing
work and combining information.
For example, only 0.5% of the available
data is used today in decision-making processes.
It should, after all, be remembered that the
transition from steam engines to electric
motors didn’t bring about an immediate
improvement in production. The historian
Paul David noted that technicians and
managers at that time limited themselves
to replacing technologies, without chang-
ing, for example, the layout and the
organisation of the factories. It was only
the next generation who were able to fully
exploit the potential of electric motors.
With steam there was only one source of
power, and the machinery that required
the most power was positioned closest to
it. With electricity, on the other hand, every
machine could be powered by its own
motor; therefore, the layout of factories
and manufacturing plants started to be
designed on the basis of the flows of work
and materials.
Getting back to the present, on a global
scale the number of projects that
favour and drive the breakthrough that will
allow us to exploit the potential of new
technologies has grown. The US Advanced
Manufacturing Partnership, Industrie
4.07, a project adopted by the German
government, and the strategic plan Made in
China 2025 are examples of national strategies
aimed at stimulating and directing the
application of technological innovations,
to determine the outcomes of the fourth
industrial revolution. But the reach of the
transformation will be much greater and
will go well beyond industry.
Companies based in Silicon Valley,
including Uber, Airbnb, LinkedIn,
Facebook, Amazon, Google, Netflix and Twitter,
have already created a “break” with the
past by changing, potentially everywhere
and for good, our way of travelling, of
moving, of buying, of looking for jobs, of
communicating, of using multimedia
content and so on.
In March 2017, at CeBIT in Hannover
– the most important IT trade fair –
Japan presented its government programme
Society 5.0, with the intention of leading
the transformation of social structures
that will accompany the new revolution of
machines. Even if we are not yet able to
accurately outline the results of this transfor-
mation, to use the words of McAfee and
Brynjolfsson, it will surely “be character-
ized by countless instances of machine
intelligence and by billions of interconnected
brains working together”.
The outcome will depend on our ability to
imagine and build the future-present that
awaits us.
1. Westworld is a science fiction television series.
Season 1 was shown in the USA in 2016 by HBO.
2. The Human Genome Project was an international re-
search program whose goal was the complete mapping
and understanding of all the genes of human beings.
3. “Integrated circuits will lead to such wonders
as home computers – or at least terminals connected
to a central computer – automatic controls for
automobiles, and personal portable communications
equipment. The electronic wristwatch needs only
a display to be feasible today.”
Gordon E. Moore, Cramming more components
onto integrated circuits, Electronics, Volume 38,
Number 8, April 19, 1965.
4. Erik Brynjolfsson, Andrew McAfee, The Second
Machine Age: Work, Progress, and Prosperity in
a Time of Brilliant Technologies, W. W. Norton & Co
Inc, New York, 2014.
5. Data that is unstructured or time sensitive or
simply very large cannot be processed by relational
database engines. This type of data requires
a different processing approach called Big Data.
6. Artificial intelligence is an area of computer
science that emphasizes the creation of intelligent
machines that work and react like humans.
7. Zukunftsprojekt Industrie 4.0 (I40) is a national
strategic initiative from the German government.
It is pursued over a 10-15-year period and is based on
the German government’s High-Tech Strategy 2020.
The initiative was launched in 2011, allocating € 200
million in funding.