“Traditions and ideas must be revisited and reworked, communicated and debated, entangled and disentangled. (Self)-critique can be carried out neither in narcissistic isolation nor in the silence of the ineffable. In the gap between acknowledging your echoing and refusing to echo, and the gap between one’s own pure voice and its simulacrum, critical educational theory of all persuasions struggles with words. Perhaps it is more critical when its loving words are addressed to others and when it harkens to their response, though in this case too, the teacher-pupil relation is one of articulation. For, to echo Derrida here, ‘a master who forbids himself the phrase would give nothing. He would have no disciples but only slaves’ (1995, p. 147).” —Papastephanou (2004)
Papastephanou, M. (2004). Educational Critique, Critical Thinking and the Critical Philosophical Traditions. Journal of Philosophy of Education, 38(3), 369–378. https://doi.org/10.1111/j.0309-8249.2004.00391.x
Holmes, W. (2024, August 30). AI and Education: A Critical Studies Approach. Presentation at the 2024 Tsinghua Higher Education Forum (清华高等教育论坛), Institute of Education, Tsinghua University (清华大学教育研究院), Beijing Convention Center (北京会议中心), 14:25–14:50.
Derrida, J. (1995). Violence and Metaphysics. In: Writing and Difference. London: Routledge.
Definitions are artificial meaning-giving constructs. A definition is a specific linguistic form with a specific function. Definitions are patterns of weighted attributes, handpicked by means of (wanted and unwanted) biases. A definition is, then, a category of attributes that refers to a given concept and that, in turn, aims at triggering a meaning of that targeted concept.
Definitions aim at controlling this meaning-giving: what the concept could refer to and what it can contain within its proverbial borders. The specified attributes, narrated into a set (i.e. a category), make up the construct through which the concept is potentially understood.
The preceding sentences could be seen as an attempt at a definition of the concept "definition," with a hint of how some concepts in the field of AI itself are defined (hint: have a look at the definitions of "Artificial Neural Networks," "Machine Learning," or "Supervised and Unsupervised Learning"). Let us continue looking through this lens and expand on it.
Definitions can be constructed in a number of ways. For instance, they can be constructed by identifying or deciding on, and giving a description of, the main attributes of a concept; this could be done by analyzing and describing the concept's forms and functions. Definitions can also be constructed by giving examples of usage or application; by stating what a concept is (e.g. synonyms, analogies) and is not (e.g. antonyms); by referring to a historical or linguistic development (e.g. its etymology, grammatical features, historical, cultural or other contexts); by comparison with other concepts in terms of similarities and differentiators; by describing how the concept is and is not experienced; or by describing its needed resources, its possible inputs and outputs, its intended aims (as a forecast), and its actual outcome and larger impact (in retrospect). There are many ways to construct a definition. So too with a definition for the concept of "Artificial Intelligence."
For a moment, as another playful side-note, let us use our imagination to strengthen the link between the process of defining and the use of AI applications: one could imagine that an AI solution is like a "definition machine."
One could then imagine that this machine gives definition to a data set, by offering recognized patterns from within the data set at its output. This AI application could be imagined as organizing data via certain techniques, collecting data as if attributes of a resulting pattern. To the human receiver, this in turn defines and offers meaning to a selected data set. Note that it also provides meaning to the data that is not selected into the given pattern at the output. For instance, the data is labelled as "cat," not "dog," while some data has been ignored altogether (filtered out, e.g. the background "noise" around the cat). Did this imagination exercise allow one to make up a definition of AI? Perhaps. What do you think? Does this definition satisfy your needs? Does it do justice to the entire field of AI, from its "birth," through its diversification along the way, to "now"? Most likely not.
A human designer of a definition likely (though not necessarily) agrees with the selected attributes, while those receiving the designed definition might agree that it offers a pattern, but not necessarily the meaning-giving pattern they would construct themselves. Hence, definitions tend to be contested, fine-tuned, altered, updated, dismissed altogether over time and, depending on the perspective, used to review and qualify other, similar definitions. It almost seems that some definitions have a life of their own while others are, understandably, safely guarded and maintained over time.
When learning about something and looking a bit deeper than the surface, one is quickly presented with numerous definitions of what was thought to be one and the same thing, yet which show variation and diversity across a field of study. This is OK. We, as individuals within our species, are able to handle, or at least live with, ambiguities, uncertainties and change. These, by the way, are also some of the reasons why, to some extent, the fields of Statistics, Data Science and AI (with, presently, the sub-fields of Machine Learning and Deep Learning) exist.
The "biodiversity" of definitions can be managed in many ways. One can hold different ideas in one's head at the same time, just as one can think of black and white and of mixes of the two in various degrees, simultaneously, while also introducing a plethora of additional colors; this can still offer harmony in one's thinking. If that does not work, one can give more importance to one definition over another, depending on parameters befitting the aim of the learning and the usage of the definition (i.e. one's practical bias of that moment in spacetime). One can prefer to start simple, with a reduced model as offered in a modest definition, while (willingly) ignoring a number of attributes. One could then remind oneself not to equate this simplified model or definition with the larger complexities of that which it only begins to define.
One can apply a certain quality standard to allow the usage of one definition over another. One could ask a number of questions to decide on a definition. For instance: Can I still find out who made the definition? Was this definition made by an academic expert or not, or is it unknown? Was it made a long time ago, and is it still relevant to my aims? Does it define the entire field or only a small section? What is intended to be achieved with the definition? Do some people disagree with the definition, and why? Does this (part of the) definition aid me in understanding, thinking about or building on the field of AI, or does it rather give me a limiting view that does not allow me to continue (a passion for) learning? Does the definition help me initiate creativity and grow eagerness towards research, development and innovation in or with the field of AI? Does this definition allow me to understand one or another AI expert's work better? If one's answers are satisfactory at that moment, then use the definition until proven inadequate. When inadequate, reflect, adapt and move on.
With this approach in mind, the text here offers 10 further considerations and "definitions" of the concept of "Artificial Intelligence." For sure, others, and perhaps "better" ones, can be identified or constructed.
#1 An AI Definition and its Issues. The problem with many definitions of Artificial Intelligence (AI) is that they are riddled with what are called "suitcase words". They are "…terms that carry a whole bunch of different meanings that come along even if we intend only one of them. Using such terms increases the risk of misinterpretations…".[2] The term "suitcase words" was coined by a world-famous computer scientist who is considered one of the leading figures in the development of AI technologies and of the field itself: Professor MINSKY, Marvin.
#2 The Absence of a Unified Definition. On the global stage, or among all AI researchers combined, there is no official (unified) definition of what Artificial Intelligence is. It is perhaps better to state that the definition is continuously changing with every invention, discovery or innovation in the realm of Artificial Intelligence. It is also interesting to note that what was once seen as an application of AI is (by some) now no longer seen as such (and is sometimes "simply" seen as statistics or as a computer program like any other). On the other end of the spectrum, there are those (mostly non-experts, or those with narrow commercial aims) who will identify almost any computerized process as an AI application.
#3 AI Definitions and their Attributes. Perhaps a large number of researchers might agree that an AI method or application has been defined as "AI" due to the combination of the following three attributes:
it is made by humans or it is the result of a technological process that was originally created by humans,
it has the ability to operate autonomously (without the support of an operator; it has ‘agency’[3]) and
it has the ability to adapt (its behaviors) to, and improve within, changing contexts (i.e. changes in the environment), by means of a kind of technological process that could be understood as a process of "learning". Such "learning" can occur in a number of ways. One way is to "learn" by trial-and-error or by "rote learning" (e.g. the storing in memory of a solution to a problem). A more complex way of "learning" is by means of "generalization": the system can "come up" with a solution to a problem it has not previously encountered, by generalizing a mathematical rule or set of rules from given examples (i.e. data). The latter is more supportive of being adaptable in changing and uncertain environments (a minimal code sketch after this list contrasts the two modes).
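To make the contrast concrete, here is a minimal Python sketch (not from any cited source; all names and numbers are illustrative) of the difference between rote learning and generalization:

```python
# Rote learning: store each solved problem; fail on anything unseen.
memory = {0: 0.0, 100: 62.137}          # known (km, miles) example pairs

def rote_convert(km):
    return memory.get(km)                # None for any km never stored

# Generalization: infer a rule (here, a single constant) from an example,
# then apply it to inputs never encountered during "training".
constant = memory[100] / 100             # 0.62137, a learned "parameter"

def general_convert(km):
    return km * constant

print(rote_convert(50))      # None  -- never memorized
print(general_convert(50))   # ~31.07 -- inferred from the learned rule
```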
#4 AI Definitions by Example. Artificial Intelligence could, alternatively, be defined by listing examples of its applications and methods. As such, some might define AI by listing its methods (individual methods within the category of AI methods; also see, below, one listing of types and methods towards defining the AI framework): AI then, for instance, includes Machine Learning, Deep Learning and so on.
Others might define AI by means of its applications, whereby AI is, for instance, a system that can "recognize", locate or identify specific patterns or distinct objects in (extra-large, digital or digitized) data sets, where such a data set could, for instance, be an image or a video of any objects (within a set), or a set or string of (linguistic) sounds, be it prerecorded or in real-time, via a camera or other sensor. These objects could be a drawing, some handwriting, a bird sound, a photo of a butterfly, a person uttering a request, a vibration of a tectonic plate, and so on (note: the list is, literally, endless).
#5 AI Defined by Referencing Human Thought. Other definitions define AI as a technology that can "think" as average humans do (yet, perhaps, with far more processing power and speed). These would be "…machines with minds, in the full and literal sense… [such] AI clearly aims at genuine intelligence, not a fake imitation."[4] Such a definition creates AI research and development driven by "observations and hypotheses about human behavior," as is done in the empirical sciences.[5] At the moment of this writing, the practical execution of this definition has not yet been achieved.
#6 AI Defined by Referencing Human Actions. Further definitions of AI do not necessarily focus on the ability of thought. Rather, some definitions focus on the acts that can be performed by an AI technology. Such definitions read something like: an AI application is a technology that can act as average humans can act, or do things with perhaps far more power, strength and speed, and without getting tired, bored, annoyed or hurt by features of the act or its context (e.g. work inside a nuclear reactor). Ray Kurzweil, a famous futurist and inventor in technological areas such as AI, defined the field of AI as "the art of creating machines that perform functions that require intelligence when performed by people."[6]
#7 Rational Thinking at the Core of AI Definitions. Different from the 5th definition, here thought does not necessarily have to be defined through a human lens, or anthropocentrically. As humans, we tend to anthropomorphize some of our technologies (i.e. give a human-like shape, function, process, etc. to a technology). AI, though, does not need to take on a human-like form, function or process, unless we want it to. In effect, an AI solution does not need to take on any corporeal or physical form at all. An AI solution is not a robot; it could be embedded into a robot.
One could define the study of AI as a study of “mental faculties through the use of computational models.”[7] Another manner of defining the field in this way is stating that it is the study of the “computations that make it possible to perceive, reason and act.”[8][9]
The idea of rational thought goes all the way back to Aristotle and his aim to formalize reasoning, which could be seen as a beginning of logic. This was adopted early on as one of the possible methods in AI research towards creating AI solutions. It is, however, difficult to implement, since not everything can be expressed in a formal logic notation and not everything is perfectly certain. Moreover, not all problems that can in principle be solved by logic are practically solvable by it.[10]
#8 Rational Action at the Core of AI Definitions. A system is rational if "it does the 'right thing', given what it knows." Here, a 'rational' approach is an approach driven by mathematics and engineering. As such, "Computational Intelligence is the study of the design of intelligent agents…"[11] To have 'agency' means to have the autonomous ability, and to be enabled, to act, do or communicate with the aim of performing a (collective) task.[12] Scientists with this focus in the field of AI research "intelligent behavior in artifacts".[13]
An AI solution that can function as a 'rational agent' applies a form of logical reasoning: it is an agent that can act according to given guidelines (i.e. input), yet do so autonomously, adapt to environmental changes, and work towards a goal (i.e. output) with the best achievable results (i.e. outcome) over a duration of time, in a given (changing) space influenced by uncertainties. The application of this definition would not always result in a useful AI application: some situations are better responded to with a reflex rather than with rational deliberation. Think of a hand on a hot stove…[14] A minimal sketch of such a sense-decide-act loop follows below.
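As an illustration only, the following Python sketch shows the bare sense-decide-act loop of a hypothetical rational agent (a thermostat-like toy; every name and threshold here is an assumption for demonstration, not a definition from the cited literature):

```python
# A toy "rational agent": it perceives its environment, picks the action it
# expects to do best given what it knows, and acts -- repeatedly.

def perceive(environment):
    return environment["temperature"]

def decide(temperature, goal=21.0):
    # "Does the right thing, given what it knows": steer toward the goal state.
    if temperature < goal - 0.5:
        return "heat"
    if temperature > goal + 0.5:
        return "cool"
    return "idle"

def act(environment, action):
    if action == "heat":
        environment["temperature"] += 1.0
    elif action == "cool":
        environment["temperature"] -= 1.0

environment = {"temperature": 17.0}
for step in range(6):                     # the agent's sense-think-act cycle
    action = decide(perceive(environment))
    act(environment, action)
    print(step, action, environment["temperature"])
```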
#9 Artificial Intelligence Methods as Goal-Oriented Agents. "Artificial Intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience and decision theory."[15]
#10 AI Defined by Specific Research and Development Methods. We can somewhat understand the possible meaning of the concept "AI" by looking at what some consider the different types or methods of AI, or the different future visions of such types (in alphabetical order)[16]:
A system that knows what you are doing and acts accordingly. For instance: it senses that you carry many bags, so it automatically opens the door for you (without you needing to verbalize the need).
A "fake" AI that simulates intelligence by referencing (vast) data repositories and regurgitating the information at the appropriate time. This system, however, does not learn.
A system that can communicate with humans via text or speech, giving the human (user) the perception that it is itself also human. Ideally it would pass the Turing test.
A system that mimics biological evolutionary processes: birth, reproduction, mutation, decay, selection, death, etc. (see a future blog post for more info)
A system of algorithms that learns from data sets and which is strikingly different from a traditional program (fixed by its code). (see a future blog post for more info)
A system that historically mimicked a brain's structure and function (neurons in a network), though such systems are now driven by statistics and signal processing. (see another of my blog posts for more info)
A system that applies a neural network together with fuzzy logic: a non-linear, non-Boolean logic (values between 0 and 1, not only 0 or 1). It allows for further interpretation of vagueness and uncertainty.
A system that has a general intelligence as a human does. This is also referred to as AGI or Artificial General Intelligence. This does not yet exist and might, if we continue to pursue it, take decades to come to fruition. When it does, it might start recursive self-improvement and autonomous reprogramming, creating an exponential expansion of intelligence well beyond the confines of human understanding. (see a future blog post for more info)
A practical system of singular or narrow applications, highly focused on a problem that needs a solution via learning from given and existing data sets. This is also referred to as ANI or Artificial Narrow Intelligence.
Do you know any program or technological system that (already) fits this 5th definition?
How would you try to know whether or not it does?
Mini Project #___: Some Common Definitions of AI with Examples
Team work + Q&A:
What is your team’s definition of AI?
What seems to be the most accepted definition in your daily-life community and in a community of AI experts closest to you?
Reading + Q&A: Go through some popular and less popular definitions, with examples.
Discussion: Which definition of AI feels more acceptable to your team; why? Which definition seems less acceptable to you and your team? Why? Has your personal and first definition of AI changed? How?
Objectives: The learner can bring together the history, context, types and meaning of AI into a number of coherent definitions.
[1] Krohn, J., et al. (2019, p. 102) on the importance of context in meaning-giving; NLP through Machine Learning and Deep Learning techniques.
[2] Retrieved from Ville Valtonen at Reaktor and Professor Teemu Roos at the University of Helsinki's "Elements of AI", https://www.elementsofai.com/, on December 12, 2019.
[3] 'Agent' is from Latin 'agere', which means 'to manage', 'to drive', 'to conduct', 'to do'. To have 'agency' means to have the autonomous ability, and to be enabled, to act, do or communicate with the aim of performing a (collective) task.
[4] Haugeland, J. (Ed.). (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: The MIT Press. p. 2 and footnote #1.
[5] Russell, S. and Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Third Edition. Essex: Pearson Education. p. 2.
A beautiful and clearly-explained introduction to Neural Networks is offered in a 20 minute video by Grant Sanderson in his “3Blue1Brown” series.[1] One is invited to view this and his other enlightening pieces.
The traditional Artificial Neural Network (ANN)[2] is, at a most basic level, a kind of computational model for parallel computing between interconnected units. One unit can be given more or less numerical 'openness' (read: weight and bias)[3] than another unit, via the connections created between the units. This changing of the weight and the bias of a connection (that is, the allocation of a set of numbers, turning them up or down as if adjusting a set of dials) is the 'learning' of the network, carried out through a given algorithm. These changes (in weight and bias) influence which signal will be propagated forward to which units in the network. This could be to all units (in a traditional sense) or to some units (in more advanced developments of the basic neural network, such as Convolutional Neural Networks).[4] An algorithm processes signals through this network. At the input or inputs (e.g. the first layer), the data is split across these units. Each unit within the network can hold a signal (e.g. a number) and contains a computational rule allowing activation (via a set threshold controlled by, for instance, a sigmoid function or the more recently favored "Rectified Linear Unit," ReLU for short), to send a signal (e.g. a number) over a connection to a next unit, or to a number of units in a next layer (or to the output). The combination of all the units, connections and layers might allow the network to label, preferably correctly, the entirety of a focused-on object at the output layer. The result is that the object has been identified (again, hopefully correctly or, at least, according to the needs). A minimal sketch of such a forward pass follows below.
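For readers who want to see the moving parts, here is a minimal Python/NumPy sketch of a single forward pass through such a network; the weights, biases and layer sizes are arbitrary illustrative values, not taken from any cited source:

```python
import numpy as np

def sigmoid(z):
    # The "threshold"-style activation discussed above: squashes any input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])                  # input layer: e.g. two pixel values

W1 = np.array([[0.2, -0.4],               # weights: the tunable "dials"
               [0.7,  0.1],
               [-0.3, 0.9]])              # 3 hidden units, 2 inputs each
b1 = np.array([0.1, -0.2, 0.05])          # biases, one per hidden unit

W2 = np.array([[0.6, -0.1, 0.3]])         # output layer: 1 unit, 3 inputs
b2 = np.array([0.0])

hidden = sigmoid(W1 @ x + b1)             # each unit "decides" how strongly to fire
output = sigmoid(W2 @ hidden + b2)        # signal propagated forward to the output
print(output)                             # e.g. a score used to label the input
```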
The signal (e.g. a number) could be a representation of, for instance, one pixel in an image.[5] Note that an image is digitally composed of many pixels. One can imagine how many of these so-called 'neurons' are needed to process only one object (consisting of many pixels) in a snapshot of only the visual data (with possibly other objects, backgrounds and other sensory information sources) from the ever-changing environment surrounding an autonomous technology operated with an Artificial Neural Network. Think of a near-future driverless car driving by in your street. Simultaneously, also imagine how many neurons, and even more connections between neurons, a biological brain, as part of a human, might have. Bring to mind a human (brain) operating another car driving by in your street. The complexity of the neural interconnections and the amount of data to be processed (and to be ignored) might strike one with awe.
The oldest form of such an artificial network is the Single-layer Perceptron Network, historically followed by the Multilayer Perceptron Network. One could argue that 'ANN' is a collective name for any network that has been artificially made and that exhibits some forms and functions of connection between (conceptual) units.
An ANN was initially aimed (and still is) at mimicking (or modeling, or abstracting) the brain's neural network (i.e. the information-processing architecture in biological learning systems).
Though the term Artificial Neural Network contains the word 'neural', we should not get too stuck on the brain-like implications of this word, which is derived from the word 'neuron'. The word 'neuron' is not a precise term in the realm of AI and its networks. At times, instead of 'neuron', the word 'perceptron' has been used, especially when referring to a specific type of (early) artificial neural network using thresholds (i.e. a function that allows for the decision to let a signal through or not; for instance, the previously-mentioned sigmoid function).
Nevertheless, maybe some brainy context and association might spark an interest in one or other learner. It might spark a vision for future research and development to contextualize these artificial networks by means of analogies with the slightly more tangible biological world. After all, these biological systems we know as brains, or as nervous systems, are amazing in their signal processing potentials. A hint of this link can also be explored in Neuromorphic Engineering and Computing.
The word "neuron" comes from Ancient Greek and means 'nerve'. A 'neuron', in anatomy or biology at large, is a nerve cell within the nervous system of a living organism (of the animal kingdom, but not sponges), such as mammals (e.g. humans). By means of small electrical (electro-chemical) pulses (i.e. nerve impulses), these cells communicate with other cells in the nervous system. Such a connection between these types of cells is called a synapse. Note that neurons can be found among neither fungi nor plants (these do exchange signals, even between fungi and plants, yet in different, chemical ways)… just maybe they are a stepping stone for one or another learner to imagine an innovative way to process data and compute outputs!
The idea here is that a neuron is "like a logic gate [i.e. 'a processing element'] that receives input and then, depending on a calculation, decides either to fire or not."[6] Here, the verb "to fire" can be understood as creating an output at the location of the individual neuron. Note, too, that a "threshold" is again implied here.
An Artificial Neural Network can then be defined as "…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."[7]
Remember, a 'neuron' in an ANN, it should be underlined again, is a relatively simple mathematical function. It is generally agreed that this function is analogous to a node. Therefore, one can state that an Artificial Neural Network is built up of layers of interconnected nodes.[8] So one can notice, in or surrounding the field of AI, that words such as unit, node, neuron or perceptron are used interchangeably, while these are not identical in their deeper meaning. More recently, the word "capsule" has been introduced, presenting an upgraded version of the traditional 'node', the latter equaling one 'neuron'; a capsule, rather, is a node in a network equaling a collection of neurons.[9]
A little bit of additional information on this can be found here below.
How could the analogy with the brain be historically contextualized? In the early 1950s, with the use of electron microscopy, it was proven that the brain consists of cells, which had earlier been labelled "neurons".[10] This unequivocally showed the interconnectedness (via the neurons' extensions, called axons and dendrites) of these neurons, forming a network of a large number of these cells. A single such location of connection between neurons has been labeled a "synapse".
Since then it has been established that, for instance, the human cerebral cortex contains about 160 trillion synapses (that's a '160' followed by 12 zeros: 160,000,000,000,000) between about 100 billion neurons (100,000,000,000). Synapses are the locations between neurons where the communication between the cells is said to occur.[11] In comparison, some flies have about 100,000 neurons, and some worms a few hundred.[12] The brain is a "complex, nonlinear, and parallel computer (information-processing system)".[13] The complexity of the network comes with the degree of interconnectedness (remember, in a brain that means synapses).
Whereas it is hard for (most) humans to multiply numbers at astronomically fast speeds, it is easy for a present-day computer. While it is doable for (most) humans to identify what a car is and what it might be doing next, this is (far) less evident for a computer to handle (yet). This is where, as one of many examples, the study and development of neural networks (and now also Deep Learning) within the field of AI has come in handy, with increasingly impressive results. The work is far from finished and much can still be done.
The field of study of Artificial Neural Networks is widely believed to have started a bit earlier than the proof of the connectivity of the brain and its neurons. It is said to have begun with the 1943 publication by Dr. McCulloch and Dr. Pitts and their Threshold Logic Unit (TLU). This was followed by Rosenblatt's iterations of their model (i.e. the classical perceptron organized in a single-layered network), which in turn were iterated upon by Minsky and Papert. Generalized, these were academic proposals for what one could understand as an artificial 'neuron': a mathematical function that aimed to mimic a biological neuron, and the network made therewith, as somewhat analogously found within the brain.[14] A small sketch of such a unit follows.
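A Threshold Logic Unit in the spirit of that 1943 model can be sketched in a few lines of Python; the weights and thresholds below are illustrative assumptions, not the original paper's notation:

```python
# A McCulloch-Pitts-style Threshold Logic Unit (TLU): weighted inputs are
# summed and the unit "fires" (outputs 1) only if the sum reaches a threshold.

def tlu(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights of 1 and a threshold of 2, the unit behaves as a logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", tlu([a, b], [1, 1], threshold=2))
# Lowering the threshold to 1 turns the very same unit into a logical OR.
```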
Note, the word 'threshold' is worth considering a bit further, as it implies some of the working of both the brain's neurons and of an ANN's units. A threshold, in these contexts, implies the activation of an output if the signal crosses a mathematically-defined "line" (aka the threshold). Mathematically, this activation function can be plotted by, for instance, what is known as a sigmoid function (presently less used). The sigmoid function was particularly used in the units (read in this case: 'nodes' or 'neurons' or 'perceptrons') of the first Deep Learning Artificial Neural Networks. Presently, the sigmoid function is at times being substituted with improved methods such as what is known as "ReLU", short for 'Rectified Linear Unit'. The latter is said to allow for better results and to be easier to manage in very deep networks.[15]
Turning back to the historical narrative: it was only 15 years after the proposal of the 1943 Threshold Logic Unit, in 1958, with Rosenblatt's invention and hardware design of the Mark I Perceptron (a machine aimed at pattern recognition in images, i.e. image recognition) that a more or less practical application of such a network was built.[16] As suggested, this is considered a single-layered neural network.
This was followed by a conceptual design from Minsky and Papert considering the multilayered perceptron (MLP), using a supervised learning technique. The name gives it away: this is the introduction of the multi-layered neural network. While hinting at nonlinear functionality,[17] this design was still without the ability to perform some basic non-linear logical functions. Nevertheless, the MLP formed the basis for the neural network designs as they are developed presently. Deep Learning research and development has since advanced beyond these models.
Simon Haykin puts it with a slight variation in defining a neural network when he writes that it is a "massively parallel distributed processor, made up of simple processing units, that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: 1. Knowledge is acquired by the network from its environment through a learning process. 2. Inter-neuron connection strengths, known as synaptic weights, are used to store the acquired knowledge."[18]
Let us briefly touch on the process of learning in the context of an ANN, with a simplified analogy. One way to begin understanding the learning process, or training, of these neural networks, in a most basic sense, is by looking at how a network would (ignorantly) guess the conversion constant between kilometers and miles without using algebra. One author, Tariq Rashid, offers this beautifully simple example in far more detail: one can imagine the network honing in on the conversion constant between, for instance, kilometers and miles.
Summarized here: the neural network could reference examples. Let us, as a simple example, assume it 'knows' that 0 km equals 0 miles. It also 'knows', from another given example, that 100 km is 62.137 miles. It could 'guess' a number for the constant, given that it is known that 100 (km) × constant = some miles. The network could randomly, and very fast, offer a pseudo-constant guessed as 0.5. Obviously, that would create an error compared to the given example. In a second guess it could offer 0.7. This would create a different kind of error: the first is too small and the second too large. The network consecutively undershot and then overshot the needed value for the constant.
By repeating a similar process, whereby a next set of numbers (= adjusted parameters internal to the network) lies between 0.5 and 0.7, with one closer to 0.5 and others closer to 0.7, the network gets closer to estimating the accurate value for its needed output (e.g. 0.55 and 0.65; then 0.57 and 0.63, and so on). The adjusting of the parameters is decided by how right or wrong the output of the network model is compared to the known example that is also known to be true (e.g. a given data set for training). Mr. Rashid's publication continues this gentle introduction into supervised training and, eventually, into building an artificial neural network.
In training the neural network to become better at giving the desired output, the network's weights and biases (i.e. its parameters) are tweaked. If the output has too large an error, the tweaking process is repeated until the error in the output is acceptable and the network has turned out to be a workable model to make a prediction or give another type of output.
In the above example, one moves forward and backward until the smallest reasonable error is obtained. This is, again somewhat over-simplified, how a backpropagation algorithm functions in the training process of a network, towards making it a workable model. Note, "propagate" means to grow, extend, spread or reproduce (which, inherently, are forward movements over time). A minimal sketch of the guess-and-nudge process follows.
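Here is a minimal Python sketch of this guess-and-nudge idea, modeled loosely on Rashid's example described above (the learning rate and step count are illustrative assumptions, not Rashid's own code):

```python
# The "network" is a single parameter c, nudged up or down in proportion to
# the error against a known example (100 km = 62.137 miles).

km, true_miles = 100.0, 62.137
c = 0.5                                   # the initial (wrong) guess

for step in range(10):
    predicted = km * c                    # forward pass
    error = true_miles - predicted        # how wrong is the model?
    c += 0.001 * error                    # nudge the parameter toward the target
    print(step, round(c, 5), round(error, 3))

# c converges toward 0.62137; in a real network the same error signal is
# propagated backward to adjust many weights and biases at once.
```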
These types of networks, ANNs or other, are becoming both increasingly powerful and diversified. They are also becoming increasingly accurate in identifying and recognizing patterns in certain data sets of a visual (i.e. photos, videos), audio (i.e. spoken word, musical instruments, etc.) or other nature. They are becoming more and more able to identify patterns as well as humans can, and beyond what humans are able to handle.[19]
Dr. HINTON, Geoffrey[20] is widely considered one of the leading academics in Artificial Neural Networks (ANNs) and is specifically seen as a leading pioneer in Deep Learning.[21] Deep Learning, a type of Machine Learning, is highly dependent on various types of Artificial Neural Networks.[22] Dr. Hinton's student, Alex Krizhevsky, noticeably helped to boost the field of computer vision by winning the 2012 ImageNet Competition, being the first to do so using a neural network.
To round up this specific 'ANN' introduction, let us imagine that, in the processes of AI research, and specifically in areas similar to those of ANNs, solutions can be thought up, or are already being thought of, that are less (or more) brain-like, or for which researchers might feel less (or more) of a need to make an analogy with a biological brain. Considering processes of innovation, one might want to keep an open mind to these seemingly different meanderings of thought and creation.
Going beyond the thinking of ANNs, one might want to fine-tune one's understanding and also consider the diversity in forms and functions of these or other such networks. There are, for instance, types going around with names such as 'Deep Neural Networks' (DNNs), which are usually extremely large and are usually applied to process very large sets of data.[23] One can also find terminologies such as 'Feedforward Neural Networks' (FFNNs), which are said to be slightly more complex than the traditional, old-school perceptron networks;[24] 'Convolutional Neural Networks' (CNNs), which are common in image recognition; and 'Recurrent Neural Networks' (RNNs) with their sub-type of 'Long Short-Term Memory' networks (LSTMs), which apply feedback connections and are used in Natural Language Processing. These latter networks are claimed to still apply sigmoid functions, contrary to the increased popularity of other functions.[25] All of these and more are studied and developed in the fields of Machine Learning and Deep Learning. All these networks would take us rather deep into the technicalities of the field. You are invited to dig deeper and explore some of the offered resources.
It might be worthwhile to share that CNN solutions are particularly well-established in computer vision. The neurons specialized in the visual cortex of the brain, and how these do or do not react to the stimuli coming into their brain region from the eyes, were used as an inspiration in the development of the CNN. This design helped to reduce some of the problems that were experienced with traditional artificial neural networks. CNNs do have some shortcomings, as many of these cutting-edge inventions still need to be further researched and fine-tuned.[26]
In the process of improvement, innovation and fine-tuning, new networks are continuously being invented. For instance, in answer to some of the weaknesses of Convolutional Neural Networks (CNNs), "Capsule Networks" (CapsNets) are a relatively recent invention, from a few years ago, by Hinton and his team.[27] It is also believed that these CapsNets mimic how a human brain processes vision better than what CNNs have been able to offer up till now.
To put it too simply, it is an improvement on the previous versions of nodes in a network (a.k.a. 'neurons') and on the neural network itself. It tries to "perform inverse graphics", where inverse graphics is a process of extracting parameters from a visual that can identify the location of an object within that visual. A capsule is a function that aids in the prediction of the "presence and …parameters of a particular object at a given location."[28] The network hints at outperforming the traditional CNN in a number of ways, such as an increased ability to identify additional yet functional parameters associated with an object: think of an object's orientation, but also of its thickness, size, rotation and skew, and spatial relationships, to name but a few.[29] Although a CNN can be of use to identify an object, it cannot offer an identification of that object's location. Say a mother with a baby can be identified; the CNN cannot support identifying whether they are on the left of one's visual field versus on the right side of the image.[30] One might imagine the eventual use of this type of architecture in, for instance, autonomous vehicles.
This type of machine learning method, the Generative Adversarial Network (GAN), was invented in 2014 by Dr. Goodfellow and Dr. Bengio, among others.[31] It is an unsupervised learning technique that allows one to go beyond historical data (note: arguably, most if not all data is inherently historical from the moment following its creation). In a most basic sense, it is a type of interaction, by means of algorithms (i.e. Generative Algorithms), between two Artificial Neural Networks.
GANs allow one to create new data (or what some refer to as "lookalike data")[32] by applying certain identified features from the historical, referenced data. For instance, a data set consisting of what we humans perceive as images, in a specific style, can allow this GAN process to generate a new (set of) image(s) in the style of the studied set. Images are but one medium: GANs can handle digital music, digitized artworks, voices, faces, video… you name it. They can also cross-pollinate between media types, for instance combining a set of digitized artworks with a landscape, resulting in a landscape "photo" in the style of the artwork data set. The re-combinations and reshufflings are quasi-unlimited. Some more examples are GAN types that can…
…allow black-and-white imagery to be turned into colorful imagery, in various visual methods and styles.[33]
…turn descriptive text of, say, different birds into photo-realistic bird images.[34]
…create new images of food based on their recipes and reference images.[35]
…turn a digitized oil painting into a photo-realistic version of itself; turn a winter landscape into a summer landscape; and so on.[36]
If executed properly, the resulting image could, for instance, make an observer (i.e. a discriminator) decide that the new image (or data set) is as (authentic as) the referenced image(s) or data set(s) (note: arguably, in the digital or analog world, an image or any other media content is a data set).
It is also a technique whereby two neural networks contest with each other. They do so in a game-like setting, as known in the mathematical study of models of strategic decision-making entitled "Game Theory." (Game Theory is not to be confused with the academic field of Ludology, which is the social, anthropological and cultural study of play and game design.) While often one network's gain is the other network's loss (i.e. a zero-sum game), this is not always necessarily the case with GANs. A compressed sketch of this adversarial loop follows.
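To make the adversarial loop concrete, here is a compressed, illustrative PyTorch sketch (the library choice, network sizes and toy data are all assumptions made for this post; a real GAN would be larger and trained on real media):

```python
import torch
import torch.nn as nn

# Generator G turns noise into samples meant to resemble the "real" data set
# (here, numbers drawn from a Gaussian around 4.0). Discriminator D learns to
# tell real from generated; each network's progress pressures the other.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) + 4.0            # the "historical" data set
    fake = G(torch.randn(32, 8))               # lookalike data made from noise

    # Discriminator's turn: label real as 1, generated as 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator's turn: try to make the discriminator call fakes "real".
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

print(G(torch.randn(256, 8)).mean())           # should drift toward ~4.0
```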
It is said that GANs can also function and offer workable output with relatively small data sets (which is an advantage compared to some other techniques).[37]
GANs have huge applications in the arts, advertising, film, animation, fashion design, video gaming, etc. These professional fields are each individually known as multi-billion-dollar industries. Besides entertainment, GANs are also of use in sciences such as physics, astronomy and so on.
One can learn how to understand and build ANNs online via a number of resources. Here below are a few hand-picked projects that might offer a beginner's taste of the technology.
Project #___: Making Machine Learning Neural Networks (for K12 students, by Oxford University)
Project #___: Rashid, T. (2016). Make Your Own Neural Network. A project-driven book examining the very basics of neural networks and aiding learning, step by step, into creating a network. Published as an eBook or on paper via the CreateSpace Independent Publishing Platform.
This might be easily digested by Middle School students or by learners who cannot spend too much effort yet do want to learn about neural networks in an AI context.
[1] Schrittwieser, J. et al. (2020). Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. Online: arXiv.org, Cornell University. Retrieved on April 1, 2020 from https://arxiv.org/abs/1911.08265
[8] Rashid, T. (2016). Make Your Own Neural Network. CreateSpace Independent Publishing Platform.
[9] Sabour, S. et al. (2017). Dynamic Routing Between Capsules. Online: arXiv.org, Cornell University. Retrieved on April 22, 2020 from https://arxiv.org/pdf/1710.09829.pdf
[10] Sabbatini, R. (Feb 2003). Neurons and Synapses. The History of Its Discovery. IV. The Discovery of the Synapse. Online: cerebromente.org. Retrieved on April 23, 2020 from http://cerebromente.org.br/n17/history/neurons4_i.htm
[11] Tang, Y. et al. (2001). Total regional and global number of synapses in the human brain neocortex. In Synapse, 41, 258–273.
[12] Zheng, Z., et al. (2018). A Complete Electron Microscopy Volume of the Brain of Adult Drosophila melanogaster. In Cell, 174(3), 730–743.e22.
[13] Haykin, S. (2008). Neural Networks and Learning Machines. New York: Pearson Prentice Hall. p. 1.
[17] Samek, W. et al. (2019). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Lecture Notes in Artificial Intelligence. Switzerland: Springer. p. 9.
[21] Rumelhart, D. E.; Hinton, G. E.; Williams, R. J. (9 October 1986). "Learning representations by back-propagating errors". Nature, 323(6088), 533–536.
[23] de Marchi, L. et al. (2019). Hands-on Neural Networks. Learn How to Build and Train Your First Neural Network Model Using Python. Birmingham & Mumbai: Packt Publishing. p. 9.
[24] Charniak, E. (2018). Introduction to Deep Learning. Cambridge, MA: The MIT Press. p. 10.
[26] Géron, A. (February 2018). Introducing capsule networks. How CapsNets can overcome some shortcomings of CNNs, including requiring less training data, preserving image details, and handling ambiguity. Online: O'Reilly Media. Retrieved on April 22, 2020 from https://www.oreilly.com/content/introducing-capsule-networks/
[32] Skanski, S. (2020). Guide to Deep Learning. Basic Logical, Historical and Philosophical Perspectives. Switzerland: Springer Nature. p. 127
[33] Isola, P. et al. (2016, 2018). Image-to-Image Translation with Conditional Adversarial Networks. Online: arXiv.org, Cornell University. Retrieved on April 16, 2020 from https://arxiv.org/abs/1611.07004
[34] Zhang, H. et al. (2017). StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks. Online: arXiv.org, Cornell University. Retrieved on April 16, 2020 from https://arxiv.org/pdf/1612.03242.pdf
[35] Bar El, O. et al. (2019). GILT: Generating Images from Long Text. Online: arXiv.org, Cornell University. Retrieved on April 16, 2020 from https://arxiv.org/abs/1901.02404
[36] Zhu, J. (2017). Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Online: arXiv.org, Cornell University. Retrieved on April 16, 2020 from https://arxiv.org/pdf/1703.10593.pdf
Games (computer games and board games alike) have been used in AI research and development since the early 1950s. Scientists and engineers focus on games to measure certain stages of success in AI developments. Game settings form a closed testing environment, as if a lab, with a specific set of rules and steps. Games have a clear objective or a clear set of goals. Games also allow one to research and understand possible applications of probability (e.g. calculating the chances of winning if certain parameters are met or followed). Since very specific and focused problems need to be solved in specific game architectures, games are ideal for testing Narrow AI applications.
Narrow AI solutions are what scientists have achieved so far, as opposed to a 'General AI' solution. A General AI solution (or 'Strong AI') would be a super-intelligent construct able to solve many, if not any, humanly-thinkable problem, and / or beyond. The latter is still science fiction (until it is not). The former, Narrow AI solutions, exist in many applications and can be tested in game settings. Results from such AI designs, within game play, can subsequently be transcoded into other areas (e.g. solutions for language translation, speech recognition, weather forecasting, sales predictions, autonomously operating mechanical arms, managing efficiency in a country's electric grid,[1] or other systems).
Some Narrow AI solutions use a method of Machine Learning called "Reinforcement Learning." In simple terms, it is a way of learning by rewards or scoring. For that reason, too, games are an obvious environment that can be used, infinitely, to test and improve an AI application: games lead to rewards or scores; one can even win them. A toy sketch of such reward-driven learning follows.
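As a toy illustration of learning by rewards (a sketch only, not any particular published algorithm; all probabilities and rates here are invented for the demonstration):

```python
import random

# An agent repeatedly tries two moves in a made-up game, keeps a running
# value estimate for each move, and gradually prefers whichever scores better.
values = {"left": 0.0, "right": 0.0}      # the agent's learned estimates

def play(move):                           # the "game": right wins more often
    return 1.0 if random.random() < (0.8 if move == "right" else 0.3) else 0.0

for episode in range(1000):
    # Mostly exploit the best-known move, but keep exploring occasionally.
    if random.random() < 0.1:
        move = random.choice(["left", "right"])
    else:
        move = max(values, key=values.get)
    reward = play(move)
    values[move] += 0.05 * (reward - values[move])   # nudge estimate toward reward

print(values)   # "right" ends up valued near 0.8, "left" roughly near 0.3
```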
Moreover, a (computer) game can be played by multiple copies or versions of an AI solution, speeding up the process of reaching the best solution or strategy (to win). The latter can, for instance, be achieved by means of "evolutionary algorithms": algorithms that improve themselves through, for instance, mutations or a process of selection, as if through a biological natural selection of the fittest (i.e. the autonomous selecting, by means of a process, of a version or offspring of an algorithm that is better at solving something, while ignoring another that is not). Though, if the AI plays a computer game that has a bug, it might exploit the bug to win instead of learning the game.[2]
Chess was, besides checkers, one of the first games to be approached by the AI research community.[3] As mentioned previously, in the mid-1950s Dr. SAMUEL, Arthur wrote a checkers program. A few years earlier (circa 1951), trials were made to write applications for both chess (by Dr. PRINZ, Dietrich) and checkers (by Dr. STRACHEY, Christopher). While these earliest attempts are presently perhaps dismissed as not really being a type of AI application (since, at times, some coding tricks were used), in those days they were a modest, yet first, benchmark of what was to come in the following decades.
For instance, on May 11, 1997, the computer named "Deep Blue" beat Mr. KASPAROV,[4] the chess world champion of that time. A number of such achievements have followed, covering a number of games. Compared to today's developments, Deep Blue is no longer that impressive. In 2016, by using a form of Machine Learning, namely Deep Learning, AlphaGo defeated the world champion Mr. LEE, Sedol at Wéiqí (also known as the game of Go). That AI solution was later surpassed by AlphaGo Zero (followed in turn by AlphaZero). These systems used yet another form of Machine Learning, namely Reinforcement Learning (a method mentioned previously). This AI architecture played against itself and then against AlphaGo, winning all of the Wéiqí games against AlphaGo.
In 2017, LěngPūDàshī, the poker-playing AI, defeated some of the world's top players in Texas Hold 'Em poker. Now scientists are trying to defeat players of complex real-time online strategy video games with AI solutions. While such games might not often be taken seriously by some people, they are, technically and through the lens of AI developments, far more complex than, for instance, a chess game. Some successes have already been booked: on April 17, 2019, an AI solution defeated Dota 2 champions. Earlier that same year, human players were defeated at a game of StarCraft II. Note, the same algorithm that was trained to play Dota 2 can also be taught to move a mechanical hand. Improvements in benchmarking AI solutions with games do not stop.[5]
The word 'mathematics' comes from Ancient Greek and means as much as "fond of learning, study or knowledge". Dr. Hardy, G.H. (1877–1947), a famous mathematician, defined mathematics as the study and the making of patterns.[1] At least intuitively, as seen from these different perspectives, this might make the link between the fields of Cognitive Science, AI and mathematics a bit more obvious or exciting to some.
Looking at these two simple identifiers of math, one might come to appreciate math in itself even more; but one might also think slightly differently of "pattern recognition" in the field of "Artificial Intelligence" and its sub-study of "Machine Learning."[2] Following this, one might wonder whether mathematics perhaps lies at the foundation of machine or other learning.
Mathematics[3] and its many areas cover formal proof, algorithms, computation and computational thinking, abstraction, probability, decidability, and so on. Many introductory K-16 resources are freely accessible on various mathematical topics,[4] such as statistics.[5]
Statistics, as a sub-field or branch of mathematics, is the academic area focused on data and its collection, analysis (e.g. preparation, interpretation, organization, comparison, etc.), and visualization (or other forms of presentation). The field studies models based on these processes imposed onto data. Some practitioners argue that Statistics stands separately from mathematics.
The following areas of study in mathematics (and more) lie at the foundation of Machine Learning (ML).[6] Yet, it should be noted, one never stops learning mathematics for specialized ML applications:
Probability[9] Theory,[10] which is applied to make assumptions about likelihood in the given data (Bayes' Theorem, distributions, MLE, regression, inference, …);[11] (see a future post for more perceptions on probability)
Markov[12] Chains[13] which model probability[14] in processes that are possibly changing from one state into another (and back) based on the present state (and not past states).[15]
Linear Algebra,[16] which is used to describe parameters and to build algorithm and Neural Network structures;
Algebra for K-16.[17] Again, over-simplified: algebra is a major part of mathematics studying the manipulation of mathematical symbols, with the use of letters, such as to make equations and more.
(Multivariate or multivariable) Calculus,[20] which is used to develop and improve learning-related attributes in Machine Learning.
Pre-Calculus & Calculus:[21] oversimplified, one can state that this is the mathematical study of change, and thus also of motion.[22] Note, it might be advisable to first lay some foundations of (linear) algebra, geometry and trigonometry before calculus.
Multivariate (Multivariable) Calculus: instead of only dealing with one variable, here one focuses on calculus with many variables. Note, this seems not commonly covered within high school settings, ignoring the relatively few exceptional high school students who do study it.[23]
Vector[24] Calculus (i.e. Gradient, Divergence, Curl) and vector algebra:[25] of use in understanding the mathematics behind the Backpropagation Algorithm, used in present-day artificial neural networks as part of research in Machine Learning or Deep Learning and the supervised learning technique (see the short sketch after this list).
Mathematical Series and Convergence, numerical methods for Analysis
Set Theory[26] or Type Theory: the latter is similar to the former except that the latter eliminates some paradoxes found in Set Theory.
Basics of (Numerical) Optimization[27] (Linear / Quadratic)[28]
Other: discrete mathematics (e.g. proof, algorithms, set theory, graph theory), information theory, optimization, numerical and functional analysis, topology, combinatorics, computational geometry, complexity theory, mathematical modeling, …
Additional: Stochastic Models and Time Series Analysis; Differential Equations; Fourier’s and Wavelengths; Random Fields;
Even More advanced: PDEs; Stochastic Differential Equations and Solutions; PCA; Dirichlet Processes; Uncertainty Quantification (Polynomial Chaos, Projections on vector space)
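As promised above, here is a short Python/NumPy sketch tying a few of these list items together: a dot product (linear algebra), a loss, and the chain-rule gradient (calculus) of the kind that backpropagation generalizes to many layers. All numbers are illustrative assumptions:

```python
import numpy as np

x = np.array([1.0, 2.0])        # input vector
w = np.array([0.3, -0.2])       # weights (linear algebra: a dot product)
y_true = 1.0

y_pred = w @ x                  # forward pass: y = w . x
loss = 0.5 * (y_pred - y_true) ** 2

# Chain rule (vector calculus): dL/dw = dL/dy * dy/dw = (y_pred - y_true) * x
grad = (y_pred - y_true) * x
w = w - 0.1 * grad              # one gradient-descent step

print(loss, grad, w)
```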
[1] Hardy, G.H. & Snow, C.P. (1941). A Mathematician's Apology. London: Cambridge University Press.
[2] More on “pattern recognition” in the field of “Artificial Intelligence” and its sub-study of “Machine Learning” will follow elsewhere in future posts.
[3] Courant, R. et al. (1996). What Is Mathematics? An Elementary Approach to Ideas and Methods. USA: Oxford University Press.
The European Mathematical Society. (2012). The Encyclopedia of Mathematics. Online: Kluwer Academic Publishers. Retrieved on April 9, 2020 from https://www.encyclopediaofmath.org/
Khan, S. et al. (n.d.). Free K-16 Math and more. Online: Khan Academy. Retrieved on March 31, 2020 from https://www.khanacademy.org/math in Chinese 中文: https://zh.khanacademy.org/ One can also study the basics of Statistics (and probability) here.
Weisstein, E. (1995, 1999). Wolfram MathWorld. A well-known mathematics resource offering free access to its library via https://mathworld.wolfram.com/
[6] A sub-field of Artificial Intelligence research and development (more details in a future post). A resource covering mathematics for Machine Learning can be found here:
URLs for A “Pre-History” & a Foundational Context: this is the main post on a Pre-History & a Foundational Context of the Field of AI, constructing a narrative around that “Pre-History”. It links with the posts that follow, beginning with a first and very short link with Literature, Mythology & Arts as one of the foundational contexts of the Field of AI.
Cognitive Science combines various fields of academic research
into one.[1]
It is therefore called an interdisciplinary field or, when even more coherently
integrated into one, a transdisciplinary field, possibly with the involvement of
non-academic participants.[2]
It touches on the fields of anthropology, psychology, neurology or the
neurosciences, biology, the health sciences, philosophy, linguistics, computer science,
and so on.
The work of Roger Shepard, Terry Winograd[3] and David Marr, among many others, is considered to have been crucial in the development of this academic field.[4] It is also claimed that Noam Chomsky, as well as the founders of the field of AI, had a tremendous influence on the development of Cognitive Science.[5] The links between the field of Cognitive Science and the field of AI are noticeable in a number of research projects (e.g. see a future post on AGI) and publications.[6]
It is the field that scientifically studies the biological
“mental operations” (human and other), as well as the processes and attributes
assigned to or associated with “thinking”: the acquisition and
processes of “language”, “consciousness”, “perception”, “memory”, “learning”, “understanding”,
“knowledge”, “creativity”, “emotions”, “mind”, “intelligence”, “motor control”,
“vision”, models of intentional processes, the application of Bayesian methods
to mental processes, and other intellectual functions.[7]
Through scientific lenses, any of these and related terms, while seemingly
obvious in meaning in daily use, are very complex, if not debated or
contested.[8]
The field researches and develops the “mental architecture”, which
includes a model both of “information
processing and of how the mind is organized.”[9]
Hence the need for fields such as Cognitive Science. Since
these areas imply different systems, drawing on various fields (or
disciplines) as sources for Cognitive Science is not only inevitable, it is
necessary. The context of each individual system (or field, or discipline) is
potentially the core research area of a field covering another system. As
suggested above, this implies an overlap and integration of several systems (or
fields, or disciplines) into one. In turn, this requires an increased
scientific awareness and practice of interdependence between fields of
research.
Cognitive Science has advanced computational
modeling, the creation of cognitive models and the study of computational
cognition.[10]
The field of AI, through its history, found inspiration
in Cognitive Science for its study of artificial systems. One example is the
loose analogy with neurons (i.e. some of the cells making up a brain) and with
neural networks (i.e. the connection of such cells) for its mathematical models.
To some extent, an AI researcher could take the models
distilled from research in Cognitive Science and use them in their own research on
artificial systems. The bridge between the two is arguably the models, and
specifically the mathematical models.
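As a minimal illustration of such a mathematical model (a common textbook formulation, not a formula drawn from the sources cited here), a single artificial neuron can be written as

y = \varphi\left( \sum_{i=1}^{n} w_i x_i + b \right)

where the inputs x_i loosely play the role of signals arriving from other neurons, the weights w_i the strength of the connections, b a bias term, and \varphi an activation function deciding how strongly the neuron “fires”.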
Simultaneously, researchers in Cognitive Science can also
use solutions found in the field of AI to conduct their research.
Research in Artificial General Intelligence (AGI)
partially aims to recreate functions and the implied processes with their
output, which Cognitive Science studies in biological neural networks (i.e.
brains).
Some have argued that the field of AI is a sub-field of
the field of Cognitive Science; many do not subscribe to this notion.[11]
The argument has been made because in the field of AI one finds research
into processes analogous to those found in a brain: sound pattern
recognition, speech recognition, object recognition, gesture recognition, and
so on, which are in turn studied in other fields, such as Cognitive Science. It
is more commonly agreed that AI is a sub-field of Computer Science. Still, as
stated in the opening lines of this chapter, many do agree that there are strong
interdisciplinary or transdisciplinary links between the two.[12]
[3] He conducted some of his work at the Artificial
Intelligence Laboratory, a Massachusetts Institute of Technology (MIT) research
program. See Winograd, T. (1972). Understanding Natural Language. Cognitive
Psychology, 3(1), 1–191. Boston: MIT; Online:
Elsevier. Retrieved on March 25, 2020 from https://www.sciencedirect.com/science/article/abs/pii/0010028572900023
[10] Houdé, O., et al. (Ed.). (2004). Dictionary of Cognitive Science:
Neuroscience, Psychology, Artificial Intelligence, Linguistics, and Philosophy.
New York and Hove: Psychology Press; Taylor & Francis Group.
[11] Zimbardo, P., et al. (2008). Psychologie. München: Pearson Education.
Thinking at a daily and personal level, one can observe that
the human body’s physiology seemingly has a number of controls in
place for it to function properly.
Humans, among many other species, can be observed showing
different types of control. One can observe control in the biological acts
within the body, for instance in the physiological nature of one’s bodily processes,
be they more or less autonomous or automatic. Besides, for instance,
the beating of the heart or the workings of the intestines, one could
also consider processes within the brain and the degrees of control
exercised with and through the human senses.
Humans also exert control by means of, for instance, their
perceptions, their interpretations, and by a set of rituals and habitual constraints
which in turn might be controlled by a set of social, cultural or in-group norms, rules, laws and other
external or internalized constraints.
Broadening one’s view of
‘control’ further: one can find the need for some form and degree of control not only
within humans but also in any form of life, in any organism. In effect, an
organism is an example of a system of cells working together, in an
organized and cooperative manner, instrumental to their collective survival as
unified into the organism. An organism can be considered
sufficiently organized and working if some degree and some form of shared, synchronized
control underlies the cooperation of its cells.
Interestingly enough, to some
perhaps, such control is shared, within the organism, with colonies of
supportive bacteria: its microbiome (e.g. the human microbiome).[1]
While this seems very far from the topic of this text,
analogies and links between Control Theory, Machine Learning and the biological
world lie at the foundation of the academic field of AI.[2]
If one were to abstract
the thinking on the topic of ‘control’ somewhat, then these controlling systems could be
seen as supporting learning from sets of (exchanged) information or data.
These systems might engage in such acts of interchanged learning with, possibly,
the main aim of sustaining forms and degrees of stability, through adaptations,
depending on needs and contextual changes. At the very least, research
surrounding complex dynamic systems can use insights from both Control Theory and,
consequentially, the processing potentials promised by Machine Learning.
Control can imply constraining the influence of certain
variables or attributes within, or in the context of, a certain process. One
attribute (e.g. a variable or constant) could control another attribute, and
vice versa. These interactions of attributes can then compound
into a more complex system.
Control most commonly allows for the reduction of risks
and can allow a given form and function to exist (or not). The existence of
a certain form and function of control can allow a system to act (or not)
within its processes.
Zooming in and focusing, one can imagine that
similar observations and reflections have brought researchers to construct
what is known as “Control Theory.”
Control Theory is the mathematical field that studies control in
systems. It does so through the creation of mathematical models of how these
dynamic systems optimize their behavior by controlling processes within a given,
influencing environment.[3]
Through mathematics and engineering it allows for a dynamic system
to perform in a desired manner (e.g. an AI system, an autonomous vehicle, a
robotic system). Control is exercised
over the behavior of a system’s processes of any size, form, function or
complexity. Control, as a sub-process, could be inherent to a system itself,
controlling itself and learning from itself.
In a broader sense, Control Theory[4]
can be found in a number of academic fields. For instance, it is found in the
field of Linguistics, with Noam Chomsky[5]
and the control of a grammatical contextual construct over a grammatical
function. A deeper study of this aspect, while foundational to the fields of
Cognitive Science and AI, is outside the introductory spirit of this section.
As an extension of humans and
the control within their own biological workings, humans and other species
have created technologies and processes that allow them to exert more
(perceived) control over certain aspects of (their perceived) reality and their
experiences and interactions within it.
Looking closer, this is found in
the areas of biology and psychology, with the study of an organism’s processes
and its (perceptions of) positive and negative feedback loops. These control
processes allow a life form to maintain (control of its perception of) a
balance, where it is not too cold or too hot, not too hungry, and so on; or to act
on a changing situation (e.g. to start running because fear is increasing).
As one might notice, “negative”
is not something “bad” here. Here the word means that something is being
reduced so that a system’s process (e.g. the heat of a body) and its balance can be
maintained and stabilized (e.g. not too cold and not too hot). Likewise, “positive”
here does not (always) mean something “good”. It means that something is being
increased. Systems using these kinds of processes are called homeostatic systems.[6]
Such systems, among others, have been studied in the field of Cybernetics,[7]
the science of control.[8]
This field, in simple terms, studies how a system regulates itself through
control and the communication of information[9]
towards such control.
These processes (i.e. negative and
positive feedback loops) can be activated if a system predicts (or imagines)
that something will happen. Note: here is a loose link with probability, and thus with
data processing, and hence with some processes also found in AI solutions.
In a traditional sense, a loop in
engineering and its Control Theory can, for instance, be understood as
open-loop or closed-loop control. Closed-loop control exhibits a feedback
function. This feedback is provided by
means of data sent from the workings of a sensor back into the system,
controlling the functioning of the system (e.g. some attribute within the
system is stopped, started, increased or decreased, etc.).
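As a minimal, hedged sketch of such a closed loop (the setpoint, gain and heat-loss figures below are invented purely for illustration), consider a simple proportional controller in Python that feeds a sensed temperature back into the system:

# A minimal closed-loop (feedback) control sketch.
# A proportional controller keeps a room near a setpoint temperature;
# the "sensor" reading feeds back into the control action.

setpoint = 21.0          # desired temperature (degrees C), invented
temperature = 15.0       # initial sensed temperature, invented
gain = 0.5               # proportional gain, chosen arbitrarily

for step in range(10):
    error = setpoint - temperature    # feedback: sensed value vs. target
    heating = gain * error            # control action derived from the error
    temperature += heating - 0.1      # apply action; -0.1 models heat loss
    print(f"step {step}: temperature = {temperature:.2f}")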
A feedback loop is one control
technique. Artificial Intelligence applications, such as Machine Learning
and its Artificial Neural Networks, can be applied to exert degrees of control
over a changing and adapting system with these, similar, or more complex loops. These
AI methods, too, use applications that found their roots in Control Theory. These
can be traced back to the 1950s with the Perceptron system (a kind of
Artificial Neural Network) built by Rosenblatt.[10] A
number of researchers in Artificial Neural Networks, and in Machine Learning in
general, found their creative stepping stones in Control Theory.
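A minimal sketch of a Rosenblatt-style perceptron learning rule in Python may help here (this is an illustrative reconstruction, not Rosenblatt’s original hardware or exact procedure); it learns the logical AND function by feeding each output error back into its weights, a feedback loop in its own right:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                       # AND targets
w = np.zeros(2)                                  # weights
b = 0.0                                          # bias
lr = 0.1                                         # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b > 0 else 0  # step activation
        error = target - prediction              # feedback signal
        w += lr * error * xi                     # adjust weights
        b += lr * error                          # adjust bias

print([1 if xi @ w + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]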
The field of AI has links with
Cognitive Science and with some references to brain forms and brain functions
(e.g. see the loose links with neurons). Feedback loops, as they are found in
biological systems, or loops in general, have consequently been referenced
and applied in fields of engineering as well. Here, in association with the field
of AI, Control Theory and these loops mainly refer to the associated
engineering and mathematics used in the field of AI. Since some researchers are
exploring Artificial General Intelligence (AGI), it might also increasingly
interest one to maintain some degree of awareness of these and other links
between Biology and Artificial Intelligence, as a basis for sparking one’s
research and creative thinking in context.
[2] See for instance, Dr. Liu, Yang-Yu (刘洋彧). “…his
current research efforts focus on the study of human microbiome from the
community ecology, dynamic systems and control theory perspectives. His recent
work on the universality of human microbial dynamics has been published in
Nature…” Retrieved on April 13, 2020 from Harvard University, Harvard
Medical School, The Boston Biology and Biotechnology (BBB) Association, The
Boston Chapter of the Society of Chinese Bioscientists in America (SCBA; 美洲华人生物科学学会: 波士顿分会)
at https://projects.iq.harvard.edu/bbb-scba/people/yang-yu-liu-%E5%88%98%E6%B4%8B%E5%BD%A7-phd
and examples of papers at https://scholar.harvard.edu/yyl
[4] Manzini, M.R. (1983). On Control and Control Theory. Linguistic Inquiry, 14(3),
421–446. Retrieved April 1, 2020 from www.jstor.org/stable/4178338
[5] Chomsky, N. (1981, 1993). Lectures on Government and Binding. Holland: Foris Publications.
Reprint, 7th Edition. Berlin and New York: Mouton de Gruyter.
[6] Tsakiris, M. et al. (2018). The Interoceptive Mind: From Homeostasis to Awareness. USA: Oxford
University Press
[7] Wiener, N. (1961). Cybernetics: Or Control and Communication in the Animal and the
Machine. Cambridge, MA: The MIT Press
[9] Kline, R. R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age. New
Studies in American Intellectual and Cultural History Series. USA: Johns
Hopkins University Press.
[10] Goodfellow, I., et al. (2017). Deep Learning. Cambridge, MA: MIT Press. p. 13
Image caption: “A typical, single-input, single-output feedback loop with descriptions for its various parts.” Image source: retrieved on March 30, 2020; license & attribution: Orzetto / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)
From the early days of philosophy (often
associated with the Ancient Greeks, but surely found in comparable forms in
many intellectual, knowledge-seeking communities throughout history) up
to the present day, people have created forms of logic, and have studied and thought about the
(existence, development, meaning, processes, applications, … of) mind,
consciousness, cognition, language, reasoning, rationality, learning, knowledge,
and so on.
Logic, too,
has often been claimed to be an Ancient Greek invention, specifically by
Aristotle (384 B.C. to 322 B.C.). It has, however, more or less independent
traditions across the globe and across time. Logic lies at the basis of, for
instance, Computational Thinking, coding, mathematics, language, and
Artificial Intelligence. At its most basic (and etymologically), logic comes
from the Ancient Greek “Logos” (λόγος), which simply means “speech”, “reasoning”,
“word” or “study”. Logic can, traditionally, be understood as “a method of
human thought that involves thinking in a linear, step-by-step manner about how
a problem can be solved. Logic is the basis of many principles including the
scientific method.”[1] Note, however, that the results of research and development (R&D) in fields
that could be associated with the field of AI, and within the field of AI
itself, show that today logic, in its various forms, is not only a linear
process. Moreover, the study of logic is no
longer limited to the field of philosophy alone; it is studied in various
fields including computer science, linguistics and cognitive science as well.
One author covering a topic of AI tried to
make the link between Philosophy and Artificial Intelligence starkly clear. As
a discipline, AI is offered for consideration as possibly being “philosophical
engineering.”[2]
In this linkage, the field of AI is positioned as one that researches philosophical
concepts from any field of science, and from Philosophy itself, which are then transcoded,
via mathematical algorithms, into artificial neural networks. This linkage
proposes that philosophy covers ideas that are experienced as, for instance, ambiguous,
complex or open to deep debate. Historically, philosophy has tried to define, or
at least explore, many concepts including ‘knowledge,’ ‘meaning,’ and ‘reasoning,’
which are broadly considered to be processes or states of a larger set known as
“intelligence”. The latter, too, has been a fertile topic for philosophy.
The field of AI has likewise been trying to explore, or even solve, some of these
attributes. The moment it solved some expression of these, it was often
perceived as taking away not only the mystery but also the intelligence of the
expressed form. The first checkers or chess “AI” application is hardly
considered “intelligent” these days. The first AI solution to beat a champion
in such culturally established board games was later shown to lack
sufficient “intelligence” to beat a newer version of an AI application. Maybe
that improved version might (or will) be beaten again, letting the AI
applications race on and on? Contrary to “philosophical
engineering,” might the field of AI be practically engineering the philosophy out of some concepts?
Find out what “algorithm” means in general (in a more non-mathematical or non-coding sense). Can you find similarities with the meaning of “logic”? If so, which attributes seem similar?
What do you think ‘algorithm’ could mean and could be in daily life (outside of the realm of Computer Science)? Are there algorithms we use that are not found in a computer?
Collect references of what consciousness, intelligence, rationality, reasoning and mind have meant in the history of the communities and cultures around you.
Share your findings in a collection of references from the entire class.
Maybe add your findings to the collage (see the Literature project above).
Alternative: the teacher shares a few resources or references of philosophers that covered these topics and that are examples of the pre-history of AI.
Psychology has influenced, and is
influenced by, research in AI. To some degree, and still developing further, this is
the case today.[3]
Not only is psychology related to cognitive science and the
study of the processes involving perception and motor control (i.e. control of
muscles and movement); the experiments and findings from the
longer history of psychology have also influenced areas of AI.
It is important to note that while there are links
between the field of AI and psychology, some attributes in this area of study have
been contested, opposed and surpassed by cognitive science and by computer
science with its subfield of AI.
An example of a method that can be said to have found its
roots in psychology is “Transfer Learning”. This refers to a process or method learned within one
area that is used to solve an issue in an entirely different set of conditions.
For a machine, the area and conditions are the data sets and how its artificial
neural network model is balanced (i.e. “weighted”). The machine uses a
method acquired by working within one data set to work on another data set. In
this way the new data set does not have to be especially large for the machine to
return workable outputs.
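As a hedged, minimal sketch of this idea (assuming a recent version of PyTorch and torchvision; the choice of resnet18 and of 10 output classes is an arbitrary illustration, not a recipe from the sources above):

import torch.nn as nn
from torchvision import models

# Load a network whose weights were learned ("balanced") on one
# data set (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze those weights: keep the "method" acquired in the first area.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer so the frozen features can be reused on a new,
# hypothetical 10-class task; only this new layer would now be trained,
# which is why the new data set can be comparatively small.
model.fc = nn.Linear(model.fc.in_features, 10)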
The AI method known as Reinforcement Learning could be
said to have some similarities with experiments such as those
historically conducted by Pavlov and B.F. Skinner. With Pavlov, the process of “Classical
Conditioning” was introduced. This milestone in the field of psychology is most
famously remembered through Pavlov’s dog, which started to produce saliva the moment
it heard the sound of a bell (i.e. the action which Pavlov desired to observe).
This sound was initially associated with the offering of food: first the bell
was introduced, then the food, and then the dog would produce saliva. Pavlov
showed that the dog did indeed link the bell and the food. Eventually, the
dog would produce saliva on hearing the bell without getting any food. What
is important here is that the dog has no control over the production of saliva:
the response was involuntary; it was automatic. This is, in an
over-simplified explanation, Classical Conditioning.
That stated, Reinforcement Learning (RL) is a Machine
Learning method where the machine is confronted with degrees of “reward”, or
the lack thereof (see the section on RL for further details, and the minimal sketch below). Studies
surrounding reward are found in historical research conducted by the
researcher B.F. Skinner and others. It is interesting to add that this research
was contested by Chomsky, who questioned its scientific validity and
transferability to human subjects.[4]
Chomsky’s critique has been considered important in the growth of the fields
of cognitive science and AI, back in the 1950s. In these experiments a
process called “Operant Conditioning” was being tested. The researchers were
exploring voluntary responses (as opposed to the involuntary ones seen with
Pavlov); that is to say, responses that were believed to be under
the control of the test subject and that would lead to some form of learning,
following some form of reward.
Again, these descriptions are too simplistic. They are
here to nudge you towards further and deeper exploration, if this angle
excites you positively towards learning about areas in the academic field
of AI.
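To make the notion of learning from reward concrete, here is a minimal, hedged sketch of tabular Q-learning in Python (a standard RL formulation; the tiny five-state “world” and all numbers are invented for illustration):

import random

n_states, n_actions = 5, 2          # states 0..4; action 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(300):
    state = 0
    for _ in range(100):                          # cap the episode length
        if random.random() < epsilon:
            action = random.randrange(n_actions)  # explore
        else:
            best = max(Q[state])                  # exploit (break ties randomly)
            action = random.choice(
                [a for a in range(n_actions) if Q[state][a] == best])
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else 0.0  # reward at the goal state
        # Q-learning update: nudge the estimate towards the reward plus
        # the discounted best value of the next state.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
        if state == 4:                            # terminal: reward reached
            break

print([round(max(q), 2) for q in Q])  # learned values grow towards state 4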
With
Linguistics come the studies of semiotics. Semiotics could be superficially defined
as the study of symbols and of various systems for meaning-giving, including and
beyond the natural languages. One can think of visual languages, such as icons
or architecture; music is yet another form. Arguably, each sense can have its
own meaning-giving system. Some argue that Linguistics is a subfield of
semiotics, while others turn that around. Linguistics also comes with
semantics, grammatical structures (see: Professor Noam Chomsky and the Chomsky
Hierarchy)[5],
meaning-giving, knowledge representation and so on.
Linguistics
and Computer Science both study the formal properties of language (formal,
programming or natural languages). Therefore any field within Computer Science,
such as Artificial Intelligence, shares many concepts, terminologies and methods
with the fields within Linguistics (e.g. grammar, syntax, semantics, and so on).
The link between the two is studied via a theory known as “automata theory”[6],
the study of the mathematical properties of such automata. A Turing Machine is
a famous example of such an abstract machine model or automaton. It is a
machine that processes a given input by executing rules, expressed in a
given language, in a step-by-step manner (called an algorithm), ending
up offering an output (see the small sketch below). Other “languages” that connect these are, for instance,
Mathematics and Logic.
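As a small, hedged sketch of such an automaton (here a deterministic finite automaton, a far simpler cousin of the Turing Machine; the “even number of 1s” language is invented for illustration), in Python:

# A finite automaton that reads an input step by step, applying rules,
# and offers an output: accept strings of 0s and 1s that contain an
# even number of 1s.

transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(string: str) -> bool:
    state = "even"                        # start state
    for symbol in string:                 # step-by-step rule execution
        state = transitions[(state, symbol)]
    return state == "even"                # accepting state

print(accepts("1011"))   # False: three 1s
print(accepts("1001"))   # True: two 1s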
Did you
know that the word “automaton” is from Ancient Greek and means something like “self-making”, “self-moving”, or “self-willed”?
That sounds like some attributes of an idealized Artificial Intelligence
application, no?
[2] Skansi, S. (2018). Introduction to Deep Learning:
From Logical Calculus to Artificial Intelligence. In Mackie, I. et al.
Undergraduate Topics in Computer Science Series (UTiCS). Switzerland: Springer.
p. v. Retrieved on March 26, 2020 from http://www.springer.com/series/7592 AND https://github.com/skansi/dl_book
[3] Crowder, J. A. et al. (2020). Artificial Psychology: Psychological Modeling and Testing of AI Systems.
Springer
[4] Among other texts, Chomsky, N. (1959). Reviews: Verbal Behavior by B.F. Skinner.
Language, 35(1), 26–58. A 1967 version retrieved on March 26, 2020 from https://chomsky.info/1967____/
[5] Chomsky, N. (1956). Three models for the description of language. IEEE Transactions on
Information Theory, 2(3), 113–124. doi:10.1109/tit.1956.1056813 AND Fitch, W.
T., & Friederici, A. D. (2012). Artificial
grammar learning meets formal language theory: an overview. Philosophical
Transactions of the Royal Society B: Biological Sciences, 367(1598), 1933–1955.
doi:10.1098/rstb.2012.0103
[6] The automata theory is the study of abstract
machines (e.g. “automata”, “automatons”; notice the link with the word
“automation”). This study also considers how automata can be used in solving
computational problems.
01 — The Field of AI: A Foundational Context: Literature, Mythology, Visual Arts
The early Greek myths (about 2500 years ago) showcase
stories of artificially intelligent bronze automatons or statues that were
brought to life and that, in turn, exhibited degrees of “intelligence”. If you
want to dig deeper, search for Pygmalion’s Galateia, or look up the imaginary
stories of Talos (Talus).[1]
The old
Jewish myth of the “golem” (גולם), with roots reaching back many centuries,
fantasized about a creature made of clay that magically came to life. It could
be interpreted as an imagining of the raw material for a controllable
automaton and an artificial form of some degree of intelligence. While its
cultural symbolism is far richer than can be given justice here, it could be seen
as symbolizing a collective human capability: to envision giving some form and
function of intelligence to materials that we, in general, do not tend to
equate with comparable capability (i.e. raw materials for engineered design).
It is
suggested in some sources[2]
that artificial intelligence (in the literary packaging of imagined automatons
or otherwise) was also explored in European literary works, such as the 1816
German Der Sandmann (The Sandman) by Ernst Theodor Amadeus Hoffmann,[3] with the story’s character Olympia.
The artificial is also explored through the fictional character Dr. Wagner, who
creates Homunculus (a little man-like automaton) in Goethe’s Faust,[4] and
in Mary Shelley’s Frankenstein.[5] Much earlier yet, far less literary and rather philosophically, the
artificial was suggested in the 1747 publication entitled L’Homme Machine (Man a Machine) by the Frenchman Julien Offray
de la Mettrie, who posited the hypothesis that a human being, like any other animal,
is an automaton or machine.
The next post will cover some hints of Philosophy in association with the Field of AI
Mini
Project #___: Exploring the Pre-History of AI in your own and your larger
context.
Collect any other old stories from within China, Asia or elsewhere (from a location or culture that is not necessarily your own) that reference similar imaginations of “artificial intelligence” as constructed in the creative minds of our ancestors.
Share your findings in a collection of references from the entire class.
Maybe make a large collage that can be hung up on the wall, showing “artificial intelligence” from the past, throughout the ages.
Alternative: the teacher shares a few resources or references from the Arts (painting, sculpture, literature, mythology, etc.) that covered these topics and that are examples of the pre-history of AI.
[1] Parada, C. (Dec 10, 1993). Genealogical Guide to Greek Mythology. Studies in Mediterranean
Archaeology, Vol 107. Coronet Books
[2] McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects
of Artificial Intelligence. Natick: A K Peters, Ltd. p. xxv
[4] Nielsen, W. C. (2016). Goethe, Faust, and Motherless Creations. Goethe Yearbook, 23(1),
59–75. North American Goethe Society.
Information retrieved on April 8, 2020 from https://muse.jhu.edu/article/619344/pdf
[5] An artistic interpretation of the link between the
artificial life of the Frankenstein character and AI is explored here: http://frankenstein.ai/ Retrieved on April 8, 2020