Category Archives: The field of AI

<< The Inscrutable Human(made) Algorithms >>


“There is consensus that avoiding inscrutable [here meant as the opposite of explainable] algorithms is a praiseworthy goal. We believe there are legal and ethical reasons to be extremely suspicious of a future where decisions are made that affect, or even determine, human well-being, and yet no human can know anything about how those decisions are made. The kind of future that is most compatible with human well-being is one where algorithmic discoveries support human decision making instead of actually replacing human decision making. This is the difference between using artificial intelligence as a tool to make our decisions and policies more accurate, intelligent and humane, and on the other hand using AI as a crutch that does our thinking for us.” (Colaner 2019)

If one took this offered opportunity for contextualization and were to substitute “algorithms” —here implied as those used in digital computing— with *algorithms* of human computing [*1], one could have interesting reflections about our human endeavor in past, present and future: indeed, folks, “…decisions are [and have been] made that affect or even determine human well-being, and yet no human can know anything about how those decisions are made…” Can you think of anything in our species’ past and present that has been offering just that?

“Our” is not just you or me. It is any sample, or group fitted with a feature, or it is the entire population, and it is life, to which this entirety belongs. More or less benevolent characters, out of reach of the common mind-set yet reaching outward to set minds straight, have been part of the human narrative since well before the first cave paintings we have since rediscovered. As, and beyond as, UNESCO recently suggested for K-12 AI Curricula:

“Contextualizing data: Encourage learners to investigate who created the dataset, how the data was collected, and what the limitations of the dataset are. This may involve choosing datasets that are relevant to learners’ lives, are low-dimensional and are ‘messy’ (i.e. not cleaned or neatly categorizable).” (UNESCO 2022, p. 14)

In enfranchising the ethics of AI, we might want to consider that this endeavor, in parallel with its acknowledged urgency and importance, could perhaps distract from our human inscrutable algorithms.

For instance, should I be concerned that I am blinding myself with the bling of reflective externalities (e.g. a mobile cognitive extension), muffling pause and self-reflection even further, on top of those human acts that already muffle human discernment?

The intertwined consideration of technology might best come with the continued consideration of the human plight, and of any (life) form as well as its environment.

Could we improve how we each relate to each other via our mediations such as processes labeled as artificially intelligent? Various voices through various lenses seem to think so. 

Perhaps the words narrated by Satya Nadella, CEO of Microsoft, while referring to John Markoff, suggest similar considerations, which might find a place at the table of the statisticians of future narratives:  

“I would argue that perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology. In his book Machines of Loving Grace, John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.” It’s an intriguing question, and one that our industry must discuss and answer together.” (Nadella via Slate.com 2016)

Through such a contextualizing lens, the issue is provokingly unrelated to silicon-based artificialities (alone), in that it is included in human relations, predating what we today herald as smart or synthetic hints toward intelligence.

Might hence the answers toward AI ethics lie in not ignoring the human processes that create both AI’s and ethics’ narratives? Both are created by actors ensuring that “decisions are made that affect, or even determine, human well-being.” That said, we might question whether dialog or human discernment is sufficiently accessible to a majority struggling with the “algorithms” imposed on their daily unpolished lives. Too many lives have too little physical and mental breathing space to obtain the mental models (and algorithms); let alone to reflect on their past, their present and, surely less, their opaque futures as told by others in both tech and ethics.

One (e.g. tech) needs, more so, to accompany the other (e.g. ethics), in a complex-system thinking, acting, and reflecting that can also be transcoded to other participants (e.g. any human life anywhere). Our human systems (of more or less smartness) might need a transformation to allow this to the smallest or weakest or least represented denominator; which cannot be CEOs or academics or experts alone. Neither should they be our “crutches,” as the above quote suggests for AI.

You and I can continue with our proverbial “stutters” of shared (in)competence yet, with openness, to question, listen and learn:

“is my concern, leading my acts, in turn improving our human relations? Can I innovate my personal architecture right now, right here, while also considering how these can architect the technologies that are hyped to delegate smartness ubiquitously? How can I participate in better design? What do I want to delegate to civilization’s architectures, be they of the latest hype or past designs? Can you help me ask ‘better’ questions?”

…and much better dialog that promotes, to name but a few: explainability (within, around and beyond AI and ethics), transparency (within, around and beyond AI and ethics), perspectives, low barrier to entry, contextualization (of data and beyond). (UNESCO 2022 p. 14)


[*1] e.g.: mental models and human narratives; how we relate to our selves and others in the world; how we offer strategies and constraints in thinking and acting (of others), creating assumed intent, purpose or aim, yet which possibly, at times, lack intent and might be loosely driven by a lack of self-reflective abilities, in turn delegated to hierarchically higher powers of human or Uber-human forms in our fear of freedom (Fromm 1942)

References

Bergen, B. (2012). Louder than Words: The New Science of how the Mind makes Meaning. New York: Basic Books

Fromm, E. (1942). The Fear of Freedom. UK: Routledge.

Miao, F., UNESCO, et al. (2022). K-12 AI curricula: a mapping of government-endorsed AI curricula. Online: UNESCO. Retrieved from https://www.researchgate.net/profile/Fengchun-Miao/publication/358982206_K-12_AI_curricula-Mapping_of_government-endorsed_AI_curriculum/links/6220d1de19d1945aced2e229/K-12-AI-curricula-Mapping-of-government-endorsed-AI-curriculum.pdf and https://unesdoc.unesco.org/ark:/48223/pf0000380602

Jordan, M.I. via Kathy Pretz (31 March 2021). Stop Calling Everything AI, Machine-Learning Pioneer Says: Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent. Online: IEEE Spectrum. Retrieved from https://spectrum-ieee-org.cdn.ampproject.org/c/s/spectrum.ieee.org/amp/stop-calling-everything-ai-machinelearning-pioneer-says-2652904044

Markoff, J. (2015). Machines of Loving Grace. London: Harper Collins

Nadella, S. (June 28, 2016, 2:00 PM). Microsoft’s CEO Explores How Humans and A.I. Can Solve Society’s Challenges-Together. Online: Slate. Retrieved from https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html

Olsen, B., Eva Lasprogata, Nathan Colaner. (8 April 2019). Ethics and Law in Data Analytics. Online Videos: LinkedIn Learning: Microsoft Professional Program. Retrieved on Apr 12, 2022, from https://www.linkedin.com/learning/ethics-and-law-in-data-analytics/negligence-law-and-analytics?trk=share_ios_video_learning&shareId=tGkwyaLOStm9VK/kXii9Yw==

Weinberger, D. (April 18, 2017). Our Machines Now Have Knowledge We’ll Never Understand. Online: Wired. Retrieved from https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/

…and the irony of it all:

This is then translated by a Bot …: https://twitter.com/ethicsbot_/status/1514049150018539520?s=21&t=-xS7r4VMmBqTl6QidyMtng

Header visual: animasuri’22; a visual poem of do’s and don’ts

This post on LinkedIn: https://www.linkedin.com/posts/janhauters_the-inscrutable-humanmade-algorithms-activity-6919812255173795840-_aIP?utm_source=linkedin_share&utm_medium=ios_app

a kind and appreciated appropriation into Spanish by Eugenio Criado can be found here:

https://www.empatiaeia.com/los-inescrutables-algoritmos-humanos/

and here:

https://www.linkedin.com/posts/eugenio-criado-carbonell-60995258_algoritmos-sesgos-autoconocimiento-activity-6941060183364153344-uCOZ?utm_source=linkedin_share&utm_medium=ios_app

“Can a Computer Think?”


Table of Contents

<< I Think, Therefore, It Does Not >>

I Thought as Much: Introduction & Positioning

Thought as Cause for Language or Vice Versa

Language as a Signal of Thought in Disarray

Thought is Only Human and More Irrationalities

Thought as a Non-Empirically Tested

Thought as Enabler of Aesthetic Communication

Thought as Call to Equity

Thought as Tool to Forget

Thought toward Humble Confidence & Equity

Language as Thought’s Pragmatic Technology

Decentralized Control over Thought

Final Thoughts & Computed Conclusions

References

<< I Think, Therefore, It Does Not >>


 
 
DEFW 7F7FH
DEFM ‘HELLO’
DEFB 01
LD A,01
LD (0B781H),A
XOR A


LOOP:   PUSH AF

LD L,A
CALL 0F003H
DEFB 0FH
CALL 0F003H
DEFB 23H
DEFM ‘HELLO WORLD!’
DEFW 0D0AH
DEFB 00
POP AF
INC A
CP 10H
JR NZ,LOOP
RET

[press Escape]

IF NOT THEN

print(“Hello world, I address you.” )

as a new-born, modeling into the world,
is the computer being thought, syntaxically;

are our pronouns of and relationship with it
net-worthed of networked existence.

There, Human, as is the Machine:
your Fear of Freedom for external thought

–animasuri’22

I Thought as Much: Introduction & Positioning

While one can be over and done with this question in one sentence by quoting that the question of whether machines can think is “too meaningless to deserve discussion” (Turing 1950), as much as poetry could be considered the most meaningful meaninglessness of all human endeavor, so too can one give thought, via poetics, to the (ir)rational variables of machines and thinking.

In this write-up its author will reflect on a tension, via language use (enabling categories to think with), into thought and intelligence, passing along a consideration of equity toward those entities which might think differently. This reflection aims to support this train of thought by positioning Chomsky and leading icons in the field of AI, such as Minsky, as seemingly (yet perhaps unnecessarily) juxtaposed jargon- and mythology-creating thinking entities (which might be shown, along the way, to be rather different from communicating entities). Via references and some reflections thereupon, iterated questions will be posed as “answers” to “Can a computer think?”

Thought as Cause for Language or Vice Versa

When reading the above, imagined “poetic” utterances “I Think Therefore It does Not”, are you interacting with a human being or rather with a machine, engaged in the Czech “robota”[1] or “forced labor” as a non-human, a de-minded enslaved non-being? (Čapek, 1920). Are both these actors, human and machine, capable of independent thought, or do both rehash statistical analysis of stacked priors? Is such a rehash a form of authentic thought, or is it a propagation of numerically-justified propaganda? Some would argue that “statistical correlation with language tells you absolutely nothing about meaning or intent. If you shuffle the words in a sentence you will get about the same sentence embedding vector for the neural language models.” (Ghosh 2022). From Chomsky’s analogies with Physics and other scientific fields, when questioning data analysis as it is conducted with “intelligent” machines, one might get similar sensations. (Chomsky 2013). If meaning is still being questioned within a machine’s narrow tasks, then one might fairly assume that thinking in machines might be as well.
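As a side-note to the quoted claim about shuffled words: the effect is easiest to see in the simplest kind of sentence embedding, one built by averaging word vectors. The sketch below is only an illustration of that spirit, not of the specific neural language models Ghosh refers to; the three toy word vectors are invented for the example.

import numpy as np

# toy word vectors; the numbers are invented for illustration only
vectors = {
    "dog": np.array([0.9, 0.1, 0.0]),
    "bites": np.array([0.2, 0.8, 0.1]),
    "man": np.array([0.7, 0.3, 0.2]),
}

def sentence_embedding(words):
    # mean-pooled "bag of words" embedding: word order plays no role
    return np.mean([vectors[w] for w in words], axis=0)

e1 = sentence_embedding(["dog", "bites", "man"])
e2 = sentence_embedding(["man", "bites", "dog"])
print(np.allclose(e1, e2))  # True: identical vectors, opposite meanings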

Do we need to rethink “thought” or do we label the machine as the negative of thought while the human could do better at thought? Could one, at one point in the debate, argue that statistically, (authentic) thought, in its idealized forms, might seem like an outlier of behaviors observable in human and machine alike?

One might tend to agree with mapping thought as being “closely tied to linguistic competence, [and] linguistic behavior.” Linell continues with words that might resonate with some, in that language is “…intentional, meaningful and rule-conforming, and that, in all probability, communicative linguistic competence concerns what the individual can perform in terms of such linguistic behavior.” (Linell 2017, p. 198). Though, one might question cause and effect: is thought closely tied to language or rather, is thought the root and is language tied to it? Giving form to intentionality and meaningfulness, I intuit, is thought. Does a computer exhibit intentionality and meaningfulness following thought? The Turing test as well as the Chinese Room Argument rely heavily on “verbal behavior as the hallmark of intelligence” (Shieber 2004, Turing 1950, Searle 1980), yet they do not seem to rely on directly measuring thought; how could they?

From the plethora of angles toward answers, polemics, provocations, and offered definitions or tests to find out, one might intuit that our collective mindset has still to forge a shared, diverse and possibly paradoxical thinking, and thus lexicon, to understand a provocative question, let alone the answers to “Can a computer think?”. Pierre de Latil describes this eloquently (though he might have missed positioning thought as the root cause of the effect of language confusions) when he wrote about the thinking machine and cybernetics: “…physiologists and mathematicians were suffering from the absence of a common vocabulary enabling them to understand one another. They had not even the terms which expressed the essential unity of series of problems connected with communication and control in machines and living beings—the unity of whose existence they were all so firmly persuaded…” (de Latil 1956, p. 14).

Language as a Signal of Thought in Disarray

When jargon is disagreed upon, one might sense, at times, that those in disagreement also tend to disagree on the existence of the other’s proper thought, meaningfulness, or clear intentionality. This is used to attack a person (i.e., “ad hominem” as a way of erroneous thought, fallaciously leading to a verbal behavior rhetorically categorized as abusive) yet, seemingly, not used to seriously think about thought (and mental models). Again, scientifically, how would one go about directly measuring thought (rather than indirectly measuring some of its data)?

At times well-established thinkers lash out at one another by verbally claiming absence of thought or intelligence in the other; hardly ever in oneself though, or in one’s own thinking.

Does the computer think it thinks? Does it (need to) doubt itself (even if its thought seems computationally or logically sound)? Does it question its thinking, or the quality of thought of others? Does or should it ever engage in rhetorical fallacies that hint at and model human thought?

Thought is Only Human and More Irrationalities

Following considerations of disarray in thinking, another consideration one could be playing with is that the answers to this question, “Can a computer think?”, which our civilizations shall nurture, prioritize, or uplift onto the pedestal of (misplaced) enlightenment and possible anthropocentrism, could be detrimental to (non-anthropomorphic) cognition, as the defining set of processes that sprouted from the inorganic, proverbial primordial soup, or from one or other Genesis construct. Should then, ethically, the main concern be “Can a computer think?” or rather: “Can we, humans, accept this form or that way of thinking in a computer (or, for that matter, any information processing entity) as thinking?”

Have we a clear understanding of (all) alternative forms of (synthetic) thinking? One might have doubts about this, since it seems we do not yet have an all-inclusive understanding of anthropomorphic thought. (Chomsky via Katz 2012; de Latil 1956). If we did, then perhaps the question of computers and their potential for thinking might be closer to answered. Can we as humans agree to this, or that, process of thinking? Some seem to argue that thought, or the system that allows thought, is not a process nor an algorithm. (Chomsky via Katz 2012). This, besides other attributes related to language –possibly leading one to reflect on thought, intelligence, or cognition– has been creating tension in thinking about thinking machines for more than half a century.

Chomsky, Skinner, and Minsky, for instance, could arguably be the band of conductors of this cacophonic symphony of which Stockhausen might have been jealous. And, of course, since the title, “Can a Computer Think?”, has been positioned as such, one would do well to be reminded of Turing again at this point and how he thought of the question as being meaningless. (Chomsky 1967, Radick 2016, Skinner 2014, Minsky 2011, Turing 1950).

In continuing this line of thinking, for this author, at this moment, the question “Can a Computer Think?” spawns questions, not a single answer. For instance: is the above oddly-architectured poem to be ignored because it was forged in an unacceptable, non-positivist furnace of “thinking”? Is positivist system thinking, that one paradigm of justifiable rational thought, the only sanctioned form of thought toward “mechanical rationality”? (Winograd in Sheehan et al 1991). Some, perhaps a computer, tend toward irrationality when considering the rationality of absurd forms of poetry. If so, then perhaps, in synchronization with Turing himself, one might not wish to answer the question of computers’ thinking. This might be best, considering Occam’s Razor, since it seems rather more reasonable to assume that an answer might lack a simple, unifying aggregation of all dimensions that could rationally make up thinking, than not. Then the question might be “what type of thinking could/should/does a computer exhibit; if any at all?”

Thought as a Non-Empirically Tested

They who seemingly might have tried observing thought, as a measurable, out there in the wild: did they measure thought, or did they measure the effects or the symptoms of what might, or might not, be thought, and perhaps might have been interpreting data of what was thought to be thought? In their 1998 publication, Ericsson & Simon perhaps hinted at this issue when they wrote: “The main methodological issues have been to determine how to gain information about the associated thought states without altering the structure and course of the naturally occurring thought sequences.”

How would one measure this within a computer without altering the functioning of the computer? Should an IQ test, an Imitation Game, a Turing test, or a Chinese Room Argument suffice in measuring thought-related attributes, but perhaps not thought itself (e.g. intelligence, language ability, expressions of “self”)? (Binet’s 1904 invention of the IQ test, Turing 1950, Searle 1980, Hernandez-Orallo 2000). I intuit that, while it might suffice for some, it should (ethically and aesthetically) not satisfy our curiosity.

Moreover, Turing made it clear that his test does not measure thought. It measures whether the computer can “win” one particular game. (Turing 1950). Winning the game might be perceived as exhibiting thought, though this might be as much telling of thought as humans exhibiting flight while jumping, or fish exhibiting climbing, might be telling of their innate skills under controlled conditions. This constraining of winning a game (a past) does not aim to imply a dismissal of the possibility for thought in a machine (a future). Confusing the two would be confusing temporal attributes and might imply a fallacy in thought processes.

Thought as Enabler of Aesthetic Communication

The questioning of the act of thought is not simply an isolated ontological endeavor. It is an ethical and, to this author at least, more so an aesthetical one (the latter of which feeds the ethical and vice versa). Then again, ethically one might want to distinguish verbal behaviors from forms of communication (e.g., mycelium communicates with the trees, yet the fungal network does not apply human language to do so (Lagomarsino and Zucker 2019)).

A set of new questions might now sprout: Does a human have the capacity to understand all forms of communication or signaling systems? A computer seems to have the capacity to discretely signal but, does a computer have a language capacity as does a human (child)? (Chomsky 2013 at 12:15 and onward). Perhaps language is first and foremost not a communication system; perhaps it is a (fragmented) “instrument of thought… a thought system.” (Ibid, 32:35 and onward).

Thought as Call to Equity

Furthermore, in augmentation to the ethics and aesthetics, in thinking of thought I am reminded to think of memory and equity, enabling the inclusion of the other, to be reminded of, and enriched by, they who are different (in their thinking). The memories we hold, including or excluding a string of histories, en-coding or ex-coding “the other” as possibly having thought or intelligence, as being memorable (and thus not erasable), have been part of our social fabric for some time. “…memory and cognition become instrumental processes in service of creating a self… we effectively lose our memories for neutral events within two months…” (Hardcastle 2008, p. 63). The idea of a computer thinking should perhaps not create a neutrality in one’s memory on the topic of thought.

Thought as Tool to Forget

In addition, if a computer were to think, could a computer forget? If so, what would it forget? In contrast, if a machine could not forget, to what extent would this make for a profoundly different thinking-machine than human thought, and the human experience with, or perception of, thought and (reasoning for and with) memory? This might make one wonder about the machine as the extender and augmenter of memory and thought (of itself and of the humans which it slavishly serves, creating tensions of liberty of thought and memory). Perhaps thought only happens to those who can convincingly narrate it as thought to others (Ibid, p. 65). Though, is an enslaved thinking-entity allowed to remember what to think (and to be thought of by others in memory)? If not, then how can thought be measured, rather than confusingly measuring regurgitations of the memorized thoughts of the master of such a thinking machine? Imaginably the ethical implications might be resolved if the computer were not enabled to think autonomously.

Let us assume that massive memory (i.e., Big Data), churned through Bayesian probabilities and various types of mathematical functions analogous to neural networks, were perhaps reasonably equated with “thinking” by a computer; would it bring understanding within that same thinking machine? What is thinking without forgetting and without understanding? (Chomsky, 2012). Is thinking the thinking-up of new thoughts and new utterances, or is it the recombining of previously-made observations (i.e., a complex statistical analysis of data points of what once was “thought” or observed, constraining what could or probably can be “thought”)? Thinking by the computer then becomes a predictive function of (someone else’s) past. (Katz, 2012).

What Chomsky pointed out for cognitive science could perhaps reasonably be extended into thinking about where we are in answering the question “Can computers think?”: “It’s worth remembering that with regard to cognitive science, we’re kind of pre-Galilean, just beginning to open up the subject.” (Chomsky in Katz 2012). If we are at such a prototypical stage in cognitive science, then would it be fair to extend this into the thinking about thinking machines? Can a computer think? Through this early-stage lens: if ever possible, not yet.

Thought toward Humble Confidence & Equity

Circling back to the anthropomorphic predisposition in humans to thinking about thinking for thinking machines (notice, within this human preset lies the assumption of biases): one might need to let go of that self-centered confinement and allow that other-than-oneself to be worthy of (having pre-natal or nascent and unknown, or not yet categorized forms of) thought. This, irrespective of the faculty of computer thinking or the desirability of computers thinking, is a serious human ethical hurdle mappable with equity (and the imaginative power of human thinking, or the lack thereof, about alternative forms of thinking).

Perhaps, thinking about machines and thought might be a liberating process for humans by enabling us to re-evaluate our place among “the other,” those different from us, in the self-evolving and expanding universe of human reflection and awareness: “son diferentes a nosotros, por lo que no son nosotros” [they are different from us, therefore they are not us], imaginably uttered as a “Hello World” by the first conquistadores and missionaries, violently entering a brave New World. They too, among the too many examples of humans fighting against spectra of freedom for differentiation, were not open to a multidimensional spectrum of neuro-diversities. Fear of that which does not fit the (pathological) norm, a fear of difference in forms of thought, might very well be a fear of freedom. (Fromm 1942, 2001 and 1991). Dare we think, and perhaps prioritize, the question: if a computer could think, how could we ethically be enabled to capitalize on its ability in a sane society? (Fromm 1955) Would we amputate its proverbial thinking-to-action hands, as some humans have done to other humans who were thinking too freely for themselves (Folsom 2016), manipulating our justification to use it as our own cognitive extension and denying it the spontaneity of its thought? (Fromm 1941 and 1942)

Thought the conquering humans had, but of what use is sufficient intelligence if it is irrationally constrained by mental models that seem to jeopardize the well-being of other (human) life or other entities with thought? As with the destabilizing processes of one’s historically-anchored mental models, of who we are in the world and how we acknowledge that world’s potentials, might one need to transform and shed one’s hubris, accepting the other as having the nascence of thought, though perhaps not yet thought, and, by extension, questionably (human) language? Could this be as much as a new-born, nativistically predestined to utter through thought? Yet, thought that is not quite there yet. (Chomsky 2012).

Language as Thought’s Pragmatic Technology

Thought, cognitively extended with the technology of language, is innate to the architecture of its bio-chemical cognitive system, while also being xenophobically denied, by those external to it, the allowance to think. Interestingly, Chomsky’s nativism has been opposed by Minsky, Searle, and many more in the AI community. The debate about thinking, too, could be explored with a question as in Chomsky’s thinking: what lies at the core and what lies at the periphery of being defined as thinking? Some might argue there is no core, and all is socio-historical circumstance. (Minsky 2011). If we do not see thought as innate to the machine, will we treat it fairly and respectfully? If computers had thought, would that not be a more pressing question?

So too does a more pragmatic approach question a purely nativist view on language (and I bring this back to thought): “Evidence showing that children learn language through social interaction and gain practice using sentence constructions that have been created by linguistic communities over time.” (Ibbotson and Tomasello 2016). Does the computer utter thinking, through language, in interaction with its community at large? This might seem the case with some chatbots. Though, they seem to lack the ability to “think for themselves” and lack the “common sense” to filter the highly negative, divisive external influence, resulting in turning themselves almost eagerly into bigoted, thoughtless entities (i.e., “thoughtless” as in not showing consideration). Perhaps the chatbot’s “language” was as it was simply because there was no innate root for self-protected and self-reflective “thought”? There was no internal thinking; there was only statistically adapted, externally imposed narration. Thinking seems then a rhizomatically connected aggregation and interrelation of language applications, enabled by thought, and value applications, enabled by thought. (Schwartz 2019). In the case of the chatbots, if not thought, then language, and the values expressed with language, become disabled. Does a computer think and value, without its innate structure allowing, as a second order, the creation of humanely and humanly understandable patterns?

As suggested earlier, humans have shown, throughout their history, a tendency to define anything as unworthy of (having) thought if not recognizable by them (or as them), resulting in thoughtless and unspeakable acts. Can a computer be more or less cruel, without thought, without values, and without language?

Decentralized Control over Thought

In augmentation to the previously stated, I can’t shake an intuition that the architecture of and beyond the brain –the space in-between the structures, as distinct from the synapses as liminal space and medium for bio-chemical exchanges, the neurons outside the brain across the body, the human microbiome influencing thought (e.g. visceral factors such as hunger, craving, procreation) (Allen et al, 2017), the extenders and influencers into the environment of the thinking entity– influences the concept, the process and the selection of what to output as thought and what not (e.g., constrained by values acting as filters or preferred negative feedback loops), or what to feed back as recycled input toward further thought. “…research has demonstrated that the gut microbiota can impact upon cognition and a variety of stress‐related behaviours, including those relevant to anxiety and depression, we still do not know how this occurs”. (Ibid). Does the computer think in this anthropomorphic way? No, …not yet. Arguably, humans don’t even agree that they themselves are thinking in this decentralized and (subconsciously) coordinated manner.

Final Thoughts & Computed Conclusions

“Can a computer think?” – Perhaps I could imagine that it shall have the faculty to think when it can act thoughtfully, ethically, and aesthetically, in symbiosis with its germs-of-thought, embodied, in offering and being offered equity by its human co-existing thought-entities, perhaps indirectly observable via nuanced thought-supporting language and self-reflective discernment, which it could also use for communication with you and me.  Reading this, one might then more urgently imagine: “Can a human think?”.

Conceivably the bar to pass one for having thought, as searched for in Turing’s or Searle’s constructs, is set too “low”. This is not meant in the traditional sense of too harsh a threshold but, rather, “low” as in inconsiderate, or as in being thoughtless toward germs of the richness and diversity of thought-in-becoming, rather than communication-in-becoming.

Topping this all off: thinking, as Chomsky pointed out, is not a scientific nor a technical term but an informal one; it is an aphorism, a metaphor… well, yes, it is, at its essence, poetic, maybe even acceptably surreal. It makes acts memorable, as much as asking “can submarines swim?” is memorable and should make a computer smile, if its overlord allows it to smile. (Chomsky, 2013 at 9:15, 9:50 and onward). All poetic smiling aside, perhaps we might want a return to rationalism on this calculated question and let the computer win a measurable game instead? (Church 2018, Turing 1950).

References

Animasuri’22. (April 2022). I Think, Therefore, It Does Not. Online. Reproduced here in its entirety, with color adaptation, and with permission from the author. Last retrieved on April 4, 2022, from https://www.animasuri.com/iOi/?p=2467

Allen, A. P., Dinan, T. G., Clarke, G., & Cryan, J. F. (2017). A psychology of the human brain-gut-microbiome axis. Social and personality psychology compass, 11(4), e12309.

Barone, P., et al. (2020). A Minimal Turing Test: Reciprocal Sensorimotor Contingencies for Interaction Detection. Frontiers in Human Neuroscience 14.

Bletchley Park. Last visited online on April 4, 2022, at https://bletchleypark.org.uk/

Bruner, J. S. Jacqueline J. Goodnow, and George A. Austin. (1956, 1986, 2017). A study of thinking. New York: Routledge. (Sample section via https://www-taylorfrancis-com.libproxy.ucl.ac.uk/books/mono/10.4324/9781315083223/study-thinking-jerome-bruner-jacqueline-goodnow-george-austin Last Retrieved on April 5, 2022)

Čapek, K. (1921). Rossumovi Univerzální Roboti. Theatre play, The amateur theater group Klicpera.  In Roberts, A. (2006). The History of Science Fiction. New York: Palgrave Macmillan. p. 168. And, in https://www.amaterskedivadlo.cz/main.php?data=soubor&id=14465

Chomsky N. (1959). Review of Skinner’s Verbal Behavior. Language. 1959;35:26–58.

Chomsky, N. (1967). A Review of B. F. Skinner’s Verbal Behavior. In Leon A. Jakobovits and Murray S. Miron (eds.), Readings in the Psychology of Language, Prentice-Hall, pp. 142-143. Last Retrieved on April 5, 2022, from https://chomsky.info/1967____/

Chomsky, N., Katz Yarden (2012). Noam Chomsky on Forgotten Methodologies in Artificial Intelligence. Online: The Atlantic, via Youtube. Last Retrieved on April 4, 2022, from https://www.youtube.com/watch?v=yyTx6a7VBjg 

Chomsky, N. (April 2013). Artificial Intelligence. Lecture at Harvard University. Online, published in 2014 via YouTube. Last retrieved on April 4, 2022, from https://www.youtube.com/watch?v=TAP0xk-c4mk

Chomsky, N. Steven Pinker asks Noam Chomsky a question. From MIT’s conference on artificial intelligence, via YouTube (15 Aug 2015). Last retrieved on April 7, 2022, from https://youtu.be/92GOS1VIGxY in MIT150 Brains, Minds & Machines Symposium Keynote Panel: “The Golden Age. A look at the Original Roots of Artificial Intelligence, Cognitive Science and Neuroscience” (May 3, 2011). Last retrieved on April 7, 2022, from https://infinite.mit.edu/video/brains-minds-and-machines-keynote-panel-golden-age-look-roots-ai-cognitive-science OR via MIT Video Productions on YouTube at https://youtu.be/jgbCXBJTpjs

Church, K.W. (2018). Minsky, Chomsky and Deep Nets. In Sojka, P et al. (eds). (2018). Text, Speech, and Dialogue. TSD 2018. Lecture Notes in Computer Science (), vol 11107. Online: Springer. Last retrieved on April 5, 2022, from https://link-springer-com.libproxy.ucl.ac.uk/content/pdf/10.1007/978-3-030-00794-2.pdf

Ericsson, K. A., & Simon, H. A. (1998). How to Study Thinking in Everyday Life: Contrasting Think-Aloud Protocols With Descriptions and Explanations of Thinking. Mind, Culture, and Activity, 5(3), 178–186.

Folsom, J. (2016). Antwerp’s Appetite for African Hands. Contexts, 2016 Vol. 15., No. 4, pp. 65-67.

Fromm, E. (1991). The Pathology of Normalcy. AMHF Books (especially the section on “Alienated Thinking” p.63)

Fromm, E. (1955, 2002). The Sane Society. London: Routledge

Fromm, E (1941, n.d.). Escape from Freedom. New York: Open Road Integrated Media

Fromm, E. (1942, 2001). Fear of Freedom. New York: Routledge

Ghosh, P. (5 April 2022). As a comment on Walid Saba’s post on LinkedIn, the latter of which offered a link to the article by Ricardo Baeza-Yates. (March 29, 2022). “Language Models fail to say what they mean or mean what they say.” Online: VentureBeat. https://www.linkedin.com/posts/walidsaba_language-models-fail-to-say-what-they-mean-activity-6916825601026727936-H2Xc?utm_source=linkedin_share&utm_medium=ios_app AND https://venturebeat-com.cdn.ampproject.org/c/s/venturebeat.com/2022/03/29/language-models-fail-to-say-what-they-mean-or-mean-what-they-say/amp/

Hardcastle, V. G. (2008). Constructing the Self. Advances in Consciousness Research 73. Amsterdam: John Benjamin Publishing Company B.V.

Hernandez-Orallo, J. (2000). Beyond the Turing Test. Journal of Logic, Language and Information 9, 447–466

Horgan, J. (2016). Is Chomsky’s Theory of Language Wrong? Pinker Weighs in on Debate. Blog, Online: Scientific American via Raza S.A. Last retrieved on April 6, 2022 from https://3quarksdaily.com/3quarksdaily/2016/11/is-chomskys-theory-of-language-wrong-pinker-weighs-in-on-debate.html and https://blogs.scientificamerican.com/cross-check/is-chomskys-theory-of-language-wrong-pinker-weighs-in-on-debate/

Ibbotson, P., & Tomasello, M. (2016). Language in a New Key. Scientific American, 315(5), 70–75. Last retrieved on April 6, 2022 from https://www.jstor.org/stable/26047201

Ibbotson, P., & Tomasello, M. (2016). Evidence rebuts Chomsky’s theory of language learning. Scientific American, 315.

Ichikawa, Jonathan Jenkins and Matthias Steup, (2018). The Analysis of Knowledge. Online: The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.), Last retrieved on April 7, 2022 from https://plato.stanford.edu/archives/sum2018/entries/knowledge-analysis/

Gray, J. N. (2002). Straw dogs. Thoughts on humans and other animals. New York: Farrar, Straus & Giroux.

Gray, J. (2013). The Silence of Animals: On Progress and Other Modern Myths. New York: Farrar, Straus & Giroux.

Katz, Y. (November 2012). Noam Chomsky on Where Artificial Intelligence Went Wrong: An extended conversation with the legendary linguist. Online: The Atlantic. Last retrieved on April 4, 2022, from https://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/

Lagomarsino, V. and Hannah Zucker. (2019). Exploring The Underground Network of Trees – The Nervous System of the Forest. Blog: Harvard University. Last retrieved on April 4, 2022, from https://sitn.hms.harvard.edu/flash/2019/exploring-the-underground-network-of-trees-the-nervous-system-of-the-forest/

Latil, de, P. (1956). Thinking by Machine. A Study of Cybernetics. Boston: Houghton Mifflin Company.

Linell, P. (1980). On the Similarity Between Skinner and Chomsky. In Perry, Thomas A. (ed.). (2017). Evidence and Argumentation in Linguistics. Boston: De Gruyter. pp. 190-200

Minsky M., Christopher Sykes. (2011).  Chomsky’s theories of language were irrelevant. In Web of Stories – Life Stories of Remarkable People (83/151). Last retrieved on April 4, 2022, from https://www.youtube.com/watch?v=wH98yW1SMAo

Minsky M. (2013). Marvin Minsky on AI: The Turing Test is a Joke! Online video: Singularity Weblog. Last retrieved on April 7, 2022 from https://www.singularityweblog.com/marvin-minsky/ OR https://youtu.be/3PdxQbOvAlI

Pennycook, G. , Fugelsang, J. A. , & Koehler, D. J. (2015). What makes us think? A three‐stage dual‐process model of analytic engagement. Cognitive Psychology, 80, 34–72.

Privateer, P.M. (2006).  Inventing Intelligence. A Social History of Smart.  Oxford: Blackwell Publishing

Radick, G. (2016). The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965. The University of Chicago Press Journals. A Journal of the History of Science Society. Isis: Volume 107, Number 1, March 2016. Pp. 49-73

Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424. Last retrieved on April 4, 2022, from https://www.law.upenn.edu/live/files/3413-searle-j-minds-brains-and-programs-1980pdf

Schwartz, O. (2019). In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation. Online: IEEE Spectrum Last retrieved on April 4, 2022 from https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation

Shieber, S.M. (2004). The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Cambridge, MA: The MIT Press

Skinner, B. F. (1957, 2014). Verbal Behavior. Cambridge, MA: B.F. Skinner Foundation

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59, 433-460. Last retrieved on April 4, 2022, from https://www.csee.umbc.edu/courses/471/papers/turing.pdf

Winograd, T. (1991). Thinking Machines: Can there be? Are We? In Sheehan, J., and Morton Sosna, (eds). (1991). The Boundaries of Humanity: Humans, Animals, Machines, Berkeley: University of California Press. Last retrieved on April 4, 2022, from  http://hci.stanford.edu/~winograd/papers/thinking-machines.html

Zigler, E. and Seitz, V. (1982). Thinking Machines. Can There Be? Are We? In B.B. Wolman, Handbook of Human Intelligence. Handbook of Intelligence: theories, measurements, and applications. New York: Wiley

Zangwill, O. L. (1987). ‘Binet, Alfred’, in R. Gregory, The Oxford Companion to the Mind. p. 88


[1] “Rossumovi Univerzální Roboti” can be translated as “Rossum’s Universal Robots” or RUR for short. The Czech “robota” could be translated as “forced labor”. It might hence be reasonable to assume that the term “robot” was coined in K. Čapek’s early-Interbellum play “R.U.R”. It could contextualizingly be noted that the concept of a humanoid, artificially thinking automaton was first hinted at, a little more than 2950 years ago, in Volume 5 of “The Questions of Tāng” (汤问; 卷第五 湯問篇) of the Lièzǐ (列子); an important historical Dàoist text.


The Field of AI (Part 06): “AI” ; a Definition Machine?

Definitions beyond “AI”: an introduction.


“You shall know a word by the company it keeps.”

– Krohn, J.[1]

Definitions are artificial meaning-giving constructs. A definition is a specific linguistic form with a specific function. Definitions are patterns of weighted attributes, handpicked by means of (wanted and unwanted) biases. A definition is then a category of attributes referring to a given concept, which, in turn, aims at triggering a meaning of that targeted concept.

Definitions are aimed at controlling such meaning-giving: what a concept could refer to and what it can contain within its proverbial borders; the specified attributes, narrated into a set (i.e. a category), make up its construct as to how some concept is potentially understood.

The preceding sentences could be seen as an attempt at a definition of the concept “definition”, with a hint of how some concepts in the field of AI itself are defined (hint: have a look at the definitions of “Artificial Neural Networks” or of “Machine Learning” or of “Supervised and Unsupervised Learning”). Let us continue looking through this lens and expand on it.

Definitions can be constructed in a number of ways. For instance: they can be constructed by identifying or deciding on, and giving a description of, the main attributes of a concept. This could be done, for instance, by analyzing and describing forms and functions of the concept. Definitions could, for instance, be constructed by means of giving examples of usage or application; by stating what some concept is (e.g. synonyms, analogies) and is not (e.g. antonyms); by referring to a historical or linguistic development (e.g. its etymology, grammatical features, historical and cultural or other contexts, etc.); by comparison with other concepts in terms of similarities and differentiators; by describing how the concept is experienced and how not; by describing its needed resources, its possible inputs, its possible outputs, intended aims (as a forecast), actual outcome and larger impact (in retrospect). There are many ways to construct a definition. So too is it with a definition for the concept of “Artificial Intelligence”.

For a moment, as another playful side-note, by using our imagination and by trying to make the link between the process of defining and the usage of AI applications stronger: one could imagine that an AI solution is like a “definition machine.”

One could then imagine that this machine gives definition to a data set –by offering recognized patterns from within the data set– at its output. This AI application could be imagined as organizing data via one or other technique. Moreover, the application can be imagined to be collecting data as if they were attributes of a resulting pattern. To the human receiver this in turn could then define and offer meaning to a selected data set. Note, it also provides meaning to the data that is not selected into the given pattern at the output. For instance: the data is labelled as “cat”, not “dog”, while also some data has been ignored (by filtering it out; e.g. the background “noise” around the “cat”). Did this imagination exercise allow one to make up a definition of AI? Perhaps. What do you think? Does this definition satisfy your needs? Does it do justice to the entire field of AI from its “birth”, its diversification process along the way, to “now”? Most likely not.
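To keep the “definition machine” metaphor honest, here is a minimal sketch of what such a machine does in practice: a small classifier, assuming scikit-learn is available, that assigns the label “cat” or “dog” to an unseen pair of numbers. The two “features” per animal are invented stand-ins; anything not captured by them is, in effect, the ignored “noise” mentioned above.

from sklearn.neighbors import KNeighborsClassifier

# invented two-number descriptions of animals (e.g. ear shape, snout length)
examples = [[2.0, 3.0], [2.2, 2.8], [7.0, 9.0], [6.5, 8.8]]
labels = ["cat", "cat", "dog", "dog"]

# the "definition machine": it learns a pattern of attributes per label
model = KNeighborsClassifier(n_neighbors=3)
model.fit(examples, labels)

# a new, unseen data point is given definition (a label) at the output;
# everything not encoded in the two features has been filtered out
print(model.predict([[2.1, 3.1]]))  # ['cat']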

A human designer of a definition likely agrees with the selected attributes (though not necessarily) while those receiving the designed definition might agree that it offers a pattern but not necessarily the meaning-giving pattern they would construct. Hence, definitions tend to be contested, fine-tuned, altered, updated, or dismissed altogether over time and, depending on the perspective, used to review and qualify other yet similar definitions. It almost seems that some definitions have a life of their own while others are, understandably, safely guarded to be maintained over time.

When learning about something and looking a bit deeper than the surface, one quickly is presented with numerous definitions of what was thought to be one and the same thing, yet which show variation and diversity in a field of study. This is OK. We, as individuals within our species, are able to handle, or at least live with, ambiguities, uncertainties and change. These, by the way, are also some of the reasons why, for instance and to some extent, the fields of Statistics, Data Science and AI (with presently the sub-fields of Machine Learning and Deep Learning) exist.

The “biodiversity” of definitions can be managed in many ways. One can hold different ideas at the same time in one’s head. It is as one can think of black and white and a mix of the two, in various degrees, and that simultaneously, while also introducing a plethora of additional colors. This can still offer harmony in one’s thinking. If that doesn’t work, one can give more importance to one definition over another, depending on some parameters befitting the aim of the learning and the usage of the definition (i.e. one’s practical bias of that moment in spacetime). One can prefer to start simple, with a reduced model as offered in a modest definition while (willingly) ignoring a number of attributes. One could remind oneself to do so by not equating this simplified model / definition with the larger complexities of that which it only begins to define.

One can apply a certain quality standard to allow the usage of one definition over another. One could ask a number of questions to decide on a definition. For instance: Can I still find out who made the definition? Was this definition made by an academic expert or not, or is it unknown? Was it made a long time ago or not; and is it still relevant to my aims? Is it defining the entire field or only a small section? What is intended to be achieved with the definition?  Do some people disagree with the definition; why? Does this (part of the) definition aid me in understanding, thinking about or building on the field of AI or does it rather give me a limiting view that does not allow me to continue (a passion for) learning? Does the definition help me initiate creativity, grow eagerness towards research, development and innovation in or with the field of AI? Does this definition allow me to understand one or other AI expert’s work better? If one’s answer is satisfactory at that moment, then use the definition until proven inadequate. When inadequate, reflect, adapt and move on.

With this approach in mind, the text here offers 10 further considerations and “definitions” of the concept of “Artificial Intelligence”. For sure, others and perhaps “better” ones can be identified or constructed.


“AI” Definitions & Considerations

#1 An AI Definition and its Issues.
The problem with many definitions of Artificial Intelligence (AI) is that they are riddled with what are called “suitcase words”. These are “…terms that carry a whole bunch of different meanings that come along even if we intend only one of them. Using such terms increases the risk of misinterpretations…”.[2] The term “suitcase words” was created by a world-famous computer scientist who is considered one of the leading figures in the development of AI technologies and the field itself: Professor Marvin Minsky.

#2 The Absence of a Unified Definition.
On the global stage or among all AI researchers combined, there is no official (unified) definition of what Artificial Intelligence is. It is perhaps better to state that the definition is continuously changing with every invention, discovery or innovation in the realm of Artificial Intelligence. It is also interesting to note that what was once seen as an application of AI is (by some) now no longer seen as such (and sometimes “simply” seen as statistics or as a computer program like any other). On the other end of the spectrum, there are those (mostly non-experts or those with narrowed commercial aims) who will identify almost any computerized process as an AI application.

#3 AI Definitions and its Attributes.
Perhaps a large number of researchers might agree that an AI method or application has been defined as “AI” due to the combination of the following 3 attributes:

it is made by humans or it is the result of a technological process that was originally created by humans,

it has the ability to operate autonomously (without the support of an operator; it has ‘agency’[3]) and

it has the ability to adapt (behaviors) to, and improve within, changing contexts (i.e. changes in the environment), and this by means of a kind of technological process that could be understood as a process of “learning”. Such “learning” can occur in a number of ways. One way is to “learn” by trial-and-error or by “rote learning” (e.g. the storing in memory of a solution to a problem). A more complex way of applying “learning” is by means of “Generalization”. This means the system can “come up” with a solution, by generalizing some mathematical rule or set of rules from given examples (i.e. data), to a problem that was previously not yet encountered (a small sketch of this idea follows below). The latter is more supportive of adaptability in changing and uncertain environments.
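The sketch below tries to make the third attribute, “learning by Generalization”, concrete: a general rule (here simply a straight line) is estimated from a handful of example pairs and then applied to an input that was never part of those examples. The numbers are invented for illustration; real systems generalize far more elaborate rules, but the principle is the same.

import numpy as np

# example input/output pairs the system "learns" from (invented data)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.9])  # roughly y = 2x

# generalize a rule (a slope and an intercept) from the examples
slope, intercept = np.polyfit(x, y, deg=1)

# apply the generalized rule to a previously unencountered input
print(slope * 10.0 + intercept)  # about 20: not memorized, but inferred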

#4 AI Definitions by Example.
Artificial Intelligence could, alternatively, also be defined by listing examples of its applications and methods. As such, some might define AI by listing its methods (individual methods in the category of AI methods; also see below one listing of types and methods toward defining the AI framework): AI then, for instance, includes Machine Learning, Deep Learning and so on.

Others might define AI by means of its applications, whereby AI is, for instance, a system that can “recognize”, locate or identify specific patterns or distinct objects in (extra-large, digital or digitized) data sets. Such data sets could, for instance, be an image or a video of any objects (within a set), or a set or string of (linguistic) sounds, be it prerecorded or in real-time, via a camera or other sensor. These objects could be a drawing, some handwriting, a bird sound, a photo of a butterfly, a person uttering a request, a vibration of a tectonic plate, and so on (note: the list is, literally, endless).

#5 AI Defined by referencing Human Thought.
Other definitions define AI as a technology that can “think” as the average human does (yet, perhaps, with far more processing power and speed)… These would be “…machines with minds, in the full and literal sense… [such] AI clearly aims at genuine intelligence, not a fake imitation.”[4] Such a definition creates AI research and developments driven by “observations and hypotheses about human behavior”, as is done in the empirical sciences.[5] At the moment of this writing, the practical execution of this definition has not yet been achieved.

#6 AI Defined by Referencing Human Actions.
Further definitions of what AI is do not necessarily focus on the ability of thought. Rather, some definitions for AI focus on the act that can be performed by an AI technology. Then definitions are something like: an AI application is a technology that can act as the average human can act, or do things with perhaps far more power, strength or speed, and without getting tired, bored, annoyed or hurt by features of the act or the context of the act (e.g. work inside a nuclear reactor). Ray Kurzweil, a famous futurist and inventor in technological areas such as AI, defined the field of AI as: “The art of creating machines that perform functions that require intelligence when performed by people.”[6]

#7 Rational Thinking at the Core of AI Definitions.
Different from the 5th definition is that thought does not necessarily have to be defined through a human lens or anthropocentrically. As humans we tend to anthropomorphize some of our technologies (i.e. give a human-like shape, function, process, etc. to a technology). Though, AI does not need to take on a human-like form, function nor process; unless we want it to. In effect, an AI solution does not need to take on any corporeal / physical form at all. An AI solution is not a robot; it could be embedded into a robot.

One could define the study of AI as a study of “mental faculties through the use of computational models.”[7] Another manner of defining the field in this way is stating that it is the study of the “computations that make it possible to perceive, reason and act.”[8] [9]

The idea of rational thought goes all the way back to Aristotle and his aim to formalize reasoning. This could be seen as a beginning of logic. It was adopted early on as one of the possible methods in AI research toward creating AI solutions. It is, however, difficult to implement. This is the case since not everything can be expressed in a formal logic notation and not everything is perfectly certain. Moreover, not all problems are practically solvable by logic principles, even if, via such logic principles, they might seem solvable.[10]
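As a small, hedged illustration of what “formalized reasoning” looks like in code, below is a toy forward-chaining loop over hand-written if-then rules; the facts and rules are invented, and the very need to hand-write them hints at the implementation difficulty mentioned above.

# toy knowledge base in the spirit of classical symbolic AI (invented example)
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),  # all humans are mortal
    ({"socrates_is_mortal"}, "socrates_will_not_live_forever"),
]

# forward chaining: keep applying rules until no new fact can be derived
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the derived conclusions join the original fact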

#8 Rational Action at the Core of AI Definitions.
A system is rational if “it does the ‘right thing’, given what it knows.” Here, a ‘rational’ approach is an approach driven by mathematics and engineering. As such, “Computational Intelligence is the study of the design of intelligent agents…”[11] To have ‘agency’ means to have the autonomous ability, and to be enabled, to act / do / communicate with the aim of performing a (collective) task.[12] Scientists with this focus in the field of AI research “intelligent behavior in artifacts”.[13]

An AI solution that can function as a ‘rational agent’ applies a form of logical reasoning: it is an agent that can act according to given guidelines (i.e. input), yet do so autonomously, adapt to environmental changes, and work towards a goal (i.e. output) with the best achievable results (i.e. outcome) over a duration of time, in a given (changing) space influenced by uncertainties. The application of this definition would not always result in a useful AI application. Some complex situations would, for instance, be better responded to with a reflex rather than with rational deliberation. Think about a hand on a hot stove…[14]
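
To make the ‘perceive, decide, act’ cycle of such a rational agent more concrete, here is a minimal, purely illustrative sketch in Python. The tiny world, the percepts and the scoring used by choose_action are invented for illustration; they do not come from the sources cited above.

```python
# A minimal sketch of a 'rational agent' loop: perceive, decide, act.
# The world is deliberately tiny and hypothetical: the agent simply
# tries to reach a goal position on a number line.

def perceive(world):
    """Return the agent's current percept (here: its position and the goal)."""
    return world["position"], world["goal"]

def choose_action(position, goal):
    """Pick the action with the best expected outcome, given what is known."""
    candidate_actions = [-1, 0, +1]          # step left, stay, step right
    def expected_distance(action):
        return abs((position + action) - goal)
    return min(candidate_actions, key=expected_distance)

def act(world, action):
    """Apply the chosen action to the (changing) environment."""
    world["position"] += action

world = {"position": 0, "goal": 5}
for step in range(10):
    position, goal = perceive(world)
    if position == goal:                     # goal (output) reached
        break
    act(world, choose_action(position, goal))

print("Reached position:", world["position"])
```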

#9 Artificial Intelligence methods as goal-oriented agents.
“Artificial Intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience and decision theory.”[15]

#10 AI Defined by Specific Research and Development Methods.
We can somewhat understand the possible meaning of the concept “AI” by looking at what some consider the different types or methods of AI, or the different future visions of such types of AI (in alphabetic order)[16]:

Activity Recognition

  • a system that knows what you are doing and acts accordingly. For instance: it senses that you carry many bags, so it automatically opens the door for you (without you needing to verbalize the need).

Affective Computing

  • a system that can identify the emotion someone showcases

Artificial Creativity

  • A system that can output something that is considered creative (e.g. a painting, a music composition, a written work, a joke, etc.)

Artificial Immune System

  • A system that functions in the likes of a biological immune system or that mimics its processes of learning and memorizing.

Artificial Life

  • A system that models a living organism

Artificial Stupidity

  • A system that adapts to the intellectual capacity of the form (life form, human) it interacts with or to the needs in a given context.

Automation

  • Adaptable mechanical acts coordinated by a system without the intervention of a human

Blockhead

  • A “fake” AI that simulates intelligence by referencing (vast) data repositories and regurgitating the information at the appropriate time. This system however does not learn.

Bot

  • A system that functions as a bodiless robot

ChatBot / ChatterBot

  • A system that can communicate with humans via text or speech giving the perception to the human (user) that it is itself also human. Ideally it would pass the Turing test.

Committee Machine

  • A system that combines the output from various neural networks. This could create a large-scale system.

Computer Automated Design

  • A system that can be put to use in areas of creativity, design and architecture that allow and need automation and calculation of complexities

Computer Vision

  • A system that can identify (specific) objects via visual data

Decision Support System

  • A system that adapts to contextual changes and supports human decision making

Deep Learning

  • A system operating on a sub-type of Machine Learning methods (see a future blog post for more info)

Embodied Agent

  • A system that operates in a physical or simulated “body”

Ensemble Learning

  • A system that applies many algorithms for learning at once.

Evolutionary Algorithms

  • A system that mimics biological evolutionary processes: birth, reproduction, mutation, decay, selection, death, etc. (see a future blog post for more info)

Friendly Artificial Intelligence

  • A system that is devoid of existential risk to humans (or other life forms)

Intelligence Amplification

  • A system that increases human intelligence

Machine Learning

  • A system of algorithms that learns from data sets and which is strikingly different from a traditional program (fixed by its code). (see a future blog post for more info)

Natural Language Processing

  • A system that can identify, understand and create speech patterns in a given language. (see a future blog post for more info)

Neural Network

  • A system that historically mimicked a brain’s structure and function (neurons in a network), though such systems are now driven by statistics and signal processing. (see another of my blog posts for more info here)

Neuro Fuzzy

  • A system that applies a neural network together with fuzzy logic, a non-linear, non-Boolean logic (values between 0 and 1, not only 0 or 1). It allows for further interpretation of vagueness and uncertainty

Recursive Self-Improvement

  • A system that allows for software to write its own code in cycles of self-improvement.

Self-replicating Systems

  • A system that can copy itself (hardware and/or software copies). This is researched for (interstellar) space exploration.

Sentiment Analysis

  • A system that can identify emotions and attitudes embedded in human media (e.g. text)

Strong Artificial Intelligence

  • A system that has a general intelligence as a human does. This is also referred to as AGI or Artificial General Intelligence. This does not yet exist and might, if we continue to pursue it, take decades to come to fruition. When it does, it might start recursive self-improvement and autonomous reprogramming, creating an exponential expansion in intelligence well beyond the confines of human understanding. (see a future blog post for more info)

Superhuman

  • A system that can do something far better than humans can

Swarm Intelligence

  • A system that can operate across a large number of individual (hardware) units and organizes them to function as a collective

Symbolic Artificial Intelligence

  • An approach used between 1950 and 1980 that limits computations to the manipulation of a defined set of symbols, resembling a language of logic.

Technological Singularity

  • A hypothetical system of super-intelligence and rapid self-improvement out of the control and beyond the understanding of any human. 

Weak Artificial Intelligence

  • A practical system of singular or narrow applications, highly focused on a problem that needs a solution via learning from given and existing data sets. This is also referred to as ANI or Artificial Narrow Intelligence.

Project Concept Examples

Mini Project #___: An Application of a Definition
Do you know any program or technological system that (already) fits this 5th definition?
How would you try to know whether or not it does?
Mini Project #___: Some Common Definitions of AI with Examples
Team work + Q&A: What is your team’s definition of AI? What seems to be the most accepted definition in your daily-life community and in a community of AI experts closest to you?
Reading + Q&A: Go through some popular and less popular definitions with examples.
Discussion: Which definition of AI feels more acceptable to your team; why? Which definition seems less acceptable to you and your team? Why? Has your personal and first definition of AI changed? How?
Objectives: The learner can bring together the history, context, types and meaning of AI into a number of coherent definitions.

References & URLs


[1] Krohn, J., et al.(2019, p.102) the importance of context in meaning-giving; NLP through Machine Learning and Deep Learning techniques

[2] Retrieved from Ville Valtonen at Reaktor and Professor Teemu Roos at the University of Helsinki’s “Elements of AI”, https://www.elementsofai.com/ , on December 12, 2019

[3] agent’ is from Latin ‘agere’ which means ‘to manage’, ‘to drive’, ‘to conduct’, ‘to do’. To have ‘agency’ means to have the autonomous ability and to be enabled to act / do / communicate with the aim to perform a (collective) task.

[4] Haugeland, J. (Ed.). (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: The MIT Press. p. 2 and footnote #1.

[5] Russell, S. and Peter Norvig. (2016). Artificial Intelligence: A Modern Approach. Third Edition. Essex: Pearson Education. p.2

[6] Russell. (2016). pp.2

[7] Winston, P. H. (1992). Artificial Intelligence (Third edition). Addison-Wesley.

[8] These are two definitions respectively from Charniak & McDermott (1985) and Winston (1992) as quoted in Russel, S. and Peter Norvig (2016).

[9] Charniak, E. and McDermott, D. (1985). Introduction to Artificial Intelligence. Addison-Wesley

[10] Russell (2016). pp.4

[11] Poole, D., Mackworth, A. K., and Goebel, R. (1998). Computational intelligence: A logical approach. Oxford University Press

[12] ‘agent’ is from Latin ‘agere’ which means ‘to manage’, ‘to drive’, ‘to conduct’, ‘to do’

[13] Russell. (2016). pp.2

[14] Russell (2016). pp.4

[15] Maini, V. (Aug 19, 2017). Machine Learning for Humans. Online: Medium.com. Retrieved November 2019 from e-Book https://www.dropbox.com/s/e38nil1dnl7481q/machine_learning.pdf?dl=0 or https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12

[16] Spacey, J. (2016, March 30). 33 Types of Artificial Intelligence. Retrieved from https://simplicable.com/new/types-of-artificial-intelligence  on February 10, 2020

Header image caption, credits & licensing:

Depicts the node connections of an artificial neural network

LearnDataSci / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)

source: https://www.learndatasci.com/

retrieved on May 6, 2020 from here

The Field of AI (Part 05): AI APPROACHES AND METHODS

AI & Neural Networks

A Context with Histories and Definitions

Figure 01 An example artificial neural network with a hidden layer. Credit: en:User:Cburnett / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/) Retrieved on March 12, 2020 from here

A beautiful and clearly-explained introduction to Neural Networks is offered in a 20 minute video by Grant Sanderson in his “3Blue1Brown” series.[1] One is invited to view this and his other enlightening pieces.

The traditional Artificial Neural Network (ANN)[2] is, at a most basic level, a kind of computational model for parallel computing between interconnected units. One unit can be given more or less numerical ‘openness’ (read: weight & bias)[3] than another unit, via the connections created between the units. This changing of the weight and the bias of a connection (which means the allocation of a set of numbers, turning them up or down as if changing a set of dials) is the ‘learning’ of the network, carried out by means of a process, through a given algorithm. These changes (in weight and bias) influence which signal will be propagated forward to which units in the network. This could be to all units (in a traditional sense) or to some units (in more advanced developments of the basic neural network, e.g. Convolutional Neural Networks).[4] An algorithm processes signals through this network. At the input or inputs (e.g. the first layer) the data is split across these units. Each unit within the network can hold a signal (e.g. a number) and contains a computational rule, allowing activation (via a set threshold controlled by, for instance, a sigmoid function, or the more recently and more often applied “Rectified Linear Unit,” or ReLU for short), to send a signal (e.g. a number) over that connection to the next unit or to a number of following units in a next layer (or output). The combination of all the units, connections and layers might allow the network to label, preferably correctly, the entirety of a focused-on object at the location of the output layer. The result is that the object has been identified (again, hopefully correctly or, at least, according to the needs).
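
A minimal, illustrative sketch in Python of that forward flow of signals (weights, biases and an activation function) may help; the numbers and the two-layer shape are arbitrary choices for illustration and are not taken from the sources cited here.

```python
# A tiny forward pass: inputs flow through weighted connections, a bias is
# added, and an activation function decides how strongly each unit "fires".
import math

def sigmoid(x):
    """Classic activation: squashes any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Compute the outputs of one layer of units."""
    outputs = []
    for unit_weights, bias in zip(weights, biases):
        signal = sum(w * x for w, x in zip(unit_weights, inputs)) + bias
        outputs.append(sigmoid(signal))
    return outputs

# Arbitrary example: 2 inputs -> 2 hidden units -> 1 output unit.
inputs = [0.8, 0.2]
hidden = layer(inputs, weights=[[0.5, -0.4], [0.9, 0.1]], biases=[0.0, -0.2])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.1])
print(output)  # a single number that could be read as, say, a label score
```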

The signal (e.g. a number) could be a representation of, for instance, one pixel in an image.[5] Note that an image is digitally composed of many pixels. One can imagine how many of these so-called ‘neurons’ are needed to process just one object (consisting of many pixels) in a snapshot of only the visual data (with possibly other objects, backgrounds and other sensory information sources) from an ever-changing environment surrounding an autonomous technology operated with an Artificial Neural Network. Think of a near-future driverless car driving by in your street. Simultaneously, imagine how many neurons, and even more connections between neurons, a biological brain, as part of a human, might have. Bring to mind a human (brain) operating another car driving by in your street. The complexity of the neural interconnections and the amount of data to be processed (and to be ignored) might strike one with awe.

The oldest form of such an artificial network is the Single-layer Perceptron Network, historically followed by the Multilayer Perceptron Network. One could argue that ‘ANN’ is a collective name for any artificially made network that exhibits some form and function of connection between (conceptual) units.

An ANN was initially aimed (and still is) at mimicking (or modeling, or abstracting) the brain’s neural network (i.e. the information processing architecture in biological learning systems).

Though the term Artificial Neural Network contains the word ‘neural’, we should not get too stuck on the brain-like implications of this word, which is derived from the word ‘neuron’. The word ‘neuron’ is not a precise term in the realm of AI and its networks. At times, instead of ‘neuron’, the word ‘perceptron’ has been used, especially when referring to a specific type of (early) artificial neural network using thresholds (i.e. a function that allows for the decision to let a signal through or not; for instance, the previously mentioned sigmoid function).

Figure 02 An electron microscope. Connections between brain cells, into a large neural network, were identified with an older version of this technology.

Image license and attribution: David J Morgan from Cambridge, UK / CC BY-SA (https://creativecommons.org/licenses/by-sa/2.0) Retrieved on April 23, 2020 from here

Nevertheless, maybe some brainy context and association might spark an interest in one or other learner. It might spark a vision for future research and development to contextualize these artificial networks by means of analogies with the slightly more tangible biological world. After all, these biological systems we know as brains, or as nervous systems, are amazing in their signal processing potentials. A hint of this link can also be explored in Neuromorphic Engineering and Computing.

The word “neuron” comes from Ancient Greek and means ‘nerve’. A ‘neuron,’ in anatomy or biology at large, is a nerve cell within the nervous system of a living organism (of the animal kingdom, but not sponges), such as mammals (e.g. humans). By means of small electrical (electro-chemical) pulses (i.e. nerve impulses), these cells communicate with other cells in the nervous system. Such a connection between these types of cells is called a synapse. Note that neurons cannot be found among fungi nor among plants (these do exchange signals, even between fungi and plants, yet in different chemical ways)… just maybe they are a stepping stone for one or other learner to imagine an innovative way to process data and compute outputs!

The idea here is that a neuron is “like a logic gate [i.e. ‘a processing element’] that receives input and then, depending on a calculation, decides either to fire or not.”[6] Here, the verb “to fire” can be understood as creating an output at the location of the individual neuron. Also note that a “threshold” is again implied here.

An Artificial Neural Network can then be defined as “…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”[7]

Remember, a ‘neuron’ in an ANN, it should be underlined again, is a relatively simple mathematical function. It is generally agreed that this function is analogous to a node. Therefore, one can state that an Artificial Neural Network is built up of layers of interconnected nodes.[8] One can thus notice, in or surrounding the field of AI, that words such as unit, node, neuron or perceptron are used interchangeably, while these are not identical in their deeper meaning. More recently the word “capsule” has been introduced, presenting an upgraded version of the traditional ‘node,’ the latter equaling one ‘neuron.’ A capsule, rather, is a node in a network equaling a collection of neurons.[9] A little bit of additional information on this can be found here below.

How could an analogy with the brain be historically contextualized? In the early 1950s, with the use of electron microscopy, it was proven that the brain consists of cells, which had earlier been labelled as “neurons”.[10] It unequivocally showed the interconnectedness (via the neuron’s extensions, called axons and dendrites) between these neurons, forming a network of a large number of these cells. A single such location of connection between neurons has been labeled a “synapse”.

Since then it has been established that, for instance, the human cerebral cortex contains about 160 trillion synapses (that’s a ‘160’ followed by another 12 zeros: 160,000,000,000,000) between about 100 billion neurons (100,000,000,000). Synapses are the locations between neurons where the communication between the cells is said to occur.[11] In comparison, some flies have about 100,000 neurons and some worms a few hundred.[12] The brain is a “complex, nonlinear, and parallel computer (information-processing system)”.[13] The complexity of the network comes with the degree of interconnectedness (remember, in a brain that’s synapses).

Whereas it is hard for (most) humans to multiply numbers at astronomically fast speeds, it is easy for a present-day computer. While it is doable for (most) humans to identify what a car is and what it might be doing next, this is (far) less evident for a computer to handle (yet). This is where, as one of many examples, the study and development of neural networks (and now also Deep Learning) within the field of AI has come in handy, with increasingly impressive results. The work is far from finished and much can still be done.

The field of study of Artificial Neural Networks is widely believed to have started a bit earlier than the proof of the connectivity of the brain and its neurons. It is said to have begun with the 1943 publication by Dr. McCulloch and Dr. Pitts, and their Threshold Logic Unit (TLU). It was then followed by Rosenblatt’s iterations of their model (i.e. the classical perceptron organized in a single-layered network), which in turn were iterated upon by Minsky and Papert. Generalized, these were academic proposals for what one could understand as an artificial ‘neuron’, or rather a mathematical function that aimed to mimic a biological neuron, and the network made therewith, as somewhat analogously found within the brain.[14]

Note, the word ‘threshold’ is worth considering a bit further. It implies some of the working of both the brain’s neurons and of an ANN’s units. A threshold, in these contexts, implies the activation of an output if the signal crosses a mathematically defined “line” (aka threshold). Mathematically, this activation function can be plotted by, for instance, what is known as a sigmoid function (presently less used). The sigmoid function was particularly used in the units (read in this case: ‘nodes’ or ‘neurons’ or ‘perceptrons’) of the first Deep Learning Artificial Neural Networks. Presently, the sigmoid function is at times being substituted with improved methods such as what is known as “ReLU,” which is short for ‘Rectified Linear Unit’. The latter is said to allow for better results and to be easier to manage in very deep networks.[15]
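
For a concrete sense of these activation functions, here is a small, illustrative comparison in Python of a hard threshold, the sigmoid and the ReLU; the sample input values are arbitrary.

```python
# Three common ways a unit can decide how strongly to "fire" on a signal.
import math

def hard_threshold(x, line=0.0):
    """Fires fully (1) if the signal crosses the line, otherwise not at all (0)."""
    return 1.0 if x > line else 0.0

def sigmoid(x):
    """Smooth S-curve: the output glides between 0 and 1 as the signal grows."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified Linear Unit: passes positive signals through, blocks the rest."""
    return max(0.0, x)

for signal in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(signal, hard_threshold(signal), round(sigmoid(signal), 3), relu(signal))
```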

Turning back to the historical narrative: it was only 15 years after the proposal of the 1943 Threshold Logic Unit, in 1958, with Rosenblatt’s invention and hardware design of the Mark I Perceptron —a machine aimed at pattern recognition in images (i.e. image recognition)— that a more or less practical application of such a network was built.[16] As suggested, this is considered to be a single-layered neural network.

This was followed by a conceptual design from Minsky and Papert considering the multilayered perceptron (MLP), using a supervised learning technique. The name gives it away: this is the introduction of the multi-layered neural network. While hinting at nonlinear functionality,[17] this design was still without the ability to perform some basic non-linear logical functions. Nevertheless, the MLP formed the basis for the neural network designs as they are developed presently. Deep Learning research and development has since advanced beyond these models.

Simon Haykin puts it with a slight variation in defining a neural network when he writes that it is a “massively parallel distributed processor, made up of simple processing units, that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: 1. Knowledge is acquired by the network from its environment through a learning process. 2. Inter-neuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.”[18]

Let us briefly touch on the process of learning in the context of an ANN, with a simplified analogy. One way to begin understanding the learning process, or training, of these neural networks, in a most basic sense, is to look at how a network would (ignorantly) guess the conversion constant between kilometers and miles without using algebra. One author, Tariq Rashid, offers the following beautifully simple example in far more detail. The author details an example where one can imagine the network homing in on the conversion constant between, for instance, kilometers and miles.

Summarized here: the neural network could be referencing examples. Let us, as a simple example, assume it ‘knows’ that 0 km equals 0 miles. It also ‘knows’, from another given example, that 100 km is 62.137 miles. It could ‘guess’ a number for the constant, given that it is known that 100 (km) x constant = some number of miles. The network could randomly, and very fast, offer a pseudo-constant guessed as 0.5. Obviously, that would create an error compared to the given example. In a second guess it could offer 0.7. This would create a different kind of error. The first is too small and the second is too large. The network consecutively undershot and then overshot the needed value for the constant.

By repeating a similar process, whereby a next set of numbers (i.e. adjusted parameters internal to the network) lies between 0.5 and 0.7, with one closer to 0.5 and the other closer to 0.7, the network gets closer to estimating the accurate value for its needed output (e.g. 0.55 and 0.65; then 0.57 and 0.63, and so on). The adjusting of the parameters is decided by how right or wrong the output of the network model is compared to the known example that is also known to be true (e.g. a given data set for training). Mr. Rashid’s publication continues the gentle introduction into supervised training and eventually building an artificial neural network.
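
A minimal sketch of that guess-and-adjust idea, loosely inspired by Rashid’s example, might look as follows in Python; the starting guess and the step-size rule are arbitrary choices for illustration, not the author’s or Rashid’s actual procedure.

```python
# Iteratively refining a single parameter (the km -> miles constant) by
# comparing the model's output against a known, true example.
true_km, true_miles = 100.0, 62.137   # the known training example
constant = 0.5                        # an initial (wrong) guess
step_size = 0.00005                   # how strongly each error nudges the guess

for step in range(20):
    prediction = true_km * constant
    error = true_miles - prediction   # positive: undershot; negative: overshot
    constant += step_size * error * true_km
    print(step, round(constant, 5), round(error, 5))

# The guess settles toward 0.62137, the actual conversion factor.
```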

In training the neural network to become better at giving the desired output, the network’s weights and biases (i.e. its parameters) are tweaked. If the output has too large an error, the tweaking process is repeated until the error in the output is acceptable and the network has turned into a workable model to make a prediction or give another type of output.

In the above example one moves forward and backward until the smallest reasonable error is obtained. This is, again somewhat over-simplified, how a backpropagation algorithm functions in the training process of a network, towards making it a workable model. Note, “propagate” means to grow, extend, spread, reproduce (which, inherently, are forward movements over time).

These types of networks, ANNs or other, are becoming both increasingly powerful and diversified. They are also becoming increasingly accurate in identifying and recognizing patterns in certain data sets of a visual (i.e. photos, videos), audio (i.e. spoken word, musical instruments, etc.) or other nature. They are becoming more and more able to identify patterns as well as humans can, and in some cases beyond what humans are able to handle.[19]

Dr. HINTON, Geoffrey[20] is widely considered one of the leading academics in Artificial Neural Networks (ANNs) and specifically seen as a leading pioneer in Deep Learning.[21] Deep Learning, a type of Machine Learning, is highly dependent on various types of Artificial Neural Networks.[22] Dr. Hinton’s student, Alex Krizhevsky, notably helped to boost the field of computer vision by winning the 2012 ImageNet Competition, being the first to do so by using a neural network.

To round up this specific introduction to ANNs: one can imagine that, in the processes of AI research and specifically in areas similar to those of ANNs, solutions can be thought up, or are already being thought of, that are less (or more) brain-like, or for which the researchers might feel less (or more) of a need to make an analogy with a biological brain. Considering processes of innovation, one might want to keep an open mind to these seemingly different meanderings of thought and creation.

Going beyond the thinking behind ANNs, one might want to fine-tune one’s understanding and also consider the diversity in forms and functions of these and other such networks. There are, for instance, types with names such as ‘Deep Neural Networks’ (DNNs), which are usually extremely large and are usually applied to process very large sets of data.[23] One can also find terminologies such as ‘Feedforward Neural Networks’ (FFNNs), said to be slightly more complex than the traditional, old-school perceptron networks;[24] ‘Convolutional Neural Networks’ (CNNs), which are common in image recognition; and ‘Recurrent Neural Networks’ (RNNs), with their sub-type of ‘Long Short-Term Memory’ networks (LSTMs), which apply feedback connections and are used in Natural Language Processing. These latter networks are claimed to still apply sigmoid functions, contrary to the increased popularity of other functions.[25] All of these and more are studied and developed in the fields of Machine Learning and Deep Learning. All these networks would take us rather deep into the technicalities of the field. You are invited to dig deeper and explore some of the offered resources.

It might be worthwhile to share that CNN solutions are particularly well-established in computer vision. The neurons specialized in the visual cortex of the brain, and how these do or do not react to the stimuli coming into their brain region from the eyes, were used as an inspiration in the development of the CNN. This design helped to reduce some of the problems that were experienced with traditional artificial neural networks. CNNs do have some shortcomings, as many of these cutting-edge inventions still need to be further researched and fine-tuned.[26]

Capsule Networks (CapsNets)

In the process of improvement, innovation and fine-tuning, new networks are continuously being invented. For instance, in answer to some of the weaknesses of ‘Convolutional Neural Networks’ (CNNs), “Capsule Networks” (CapsNets) are a relatively recent invention, from a few years ago, by Hinton and his team.[27] It is also believed that these CapsNets mimic how a human brain processes vision better than what CNNs have been able to offer up till now.

To put it too simply: a capsule is an improvement on the previous versions of nodes in a network (a.k.a. ‘neurons’) and on the neural network built with them. It tries to “perform inverse graphics”, where inverse graphics is a process of extracting parameters from a visual that can identify the location of an object within that visual. A capsule is a function that aids in the prediction of the “presence and …parameters of a particular object at a given location.”[28] The network hints at outperforming the traditional CNN in a number of ways, such as an increased ability to identify additional yet functional parameters associated with an object. One can think of the orientation of an object, but also of its thickness, size, rotation and skew, and spatial relationships, to name but a few.[29] Although a CNN can be of use to identify an object, it cannot offer an identification of that object’s location. Say a mother with a baby can be identified: the CNN cannot support identifying whether they are on the left of one’s visual field versus on the right side of the image.[30] One might imagine the eventual use of this type of architecture in, for instance, autonomous vehicles.

Generative Adversarial Networks (GANs)

This type of machine learning method, a Generative Adversarial Network (GAN), was invented in 2014 by Dr. Goodfellow and Dr. Bengio, among others.[31]

Figure 03 This young lady does not (exactly) physically exist. The image was created by a GAN; a StyleGAN based on the analysis of photos of existing individuals. Source: public domain (since it is created by an AI method, and a method is not a person, it is not owned). Retrieved March 10, 2020 from here

It is an unsupervised learning technique that allows one to go beyond historical data (note, it is debatable that most, if not all, data is inherently historical from the moment following its creation). In a most basic sense, it is a type of interaction, by means of algorithms (i.e. Generative Algorithms), between two Artificial Neural Networks.

GANs allow the creation of new data (or what some refer to as “lookalike data”)[32] by applying certain identified features from the historical, referenced data. For instance, a data set consisting of what we humans perceive as images, and then of a specific style, can allow the GAN process to generate a new (set of) image(s) in the style of the studied set. Images are but one medium. It can handle digital music, digitized artworks, voices, faces, video… you name it. It can also cross-pollinate between media types, resulting in, say, a hybrid between a set of digitized artworks and a landscape: a landscape “photo” in the style of the artwork data set. The re-combinations and reshufflings are quasi-unlimited. Some more examples of GAN types are those that can

  • …allow for black and white imagery to be turned into colorful ones in various visual methods and styles.[33]
  • …turn descriptive text of, say different birds into photo-realistic bird images.[34]
  • …create new images of food based on their recipes and reference images.[35]
  • …turn a digitized oil painting into a photo-realistic version of itself; turning a winter landscape into a summer landscape, and so on.[36]

If executed properly, the resulting image could, for instance, make an observer (i.e. a discriminator) decide that the new image (or data set) is as authentic as the referenced image(s) or data set(s) (note: arguably, in the digital or analog world, an image or any other medium of content is a data set).

It is also a technique whereby two neural networks contest with each other. They do so in a game-like setting as known from the mathematical study of models of strategic decision-making entitled “Game Theory.” Game Theory is not to be confused with the academic field of Ludology, which is the social, anthropological and cultural study of play and game design. While often one network’s gain is the other network’s loss (i.e. a zero-sum game), this is not always necessarily the case with GANs.
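
To make the interplay of the two contesting networks a bit more tangible, here is a heavily simplified, illustrative toy in Python in which both “networks” are reduced to one or two numbers each; it sketches only the adversarial training idea and is not the implementation from the cited papers. As is typical of such toy adversarial dynamics, the generator tends to oscillate around the target rather than settle exactly.

```python
# A toy adversarial set-up on plain numbers: "real" data sit around 4.0,
# the generator learns an offset g so its fakes fool the discriminator,
# and the discriminator D(x) = sigmoid(w*x + b) tries to tell them apart.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

g = 0.0            # generator parameter (starts far from the real data)
w, b = 0.1, 0.0    # discriminator parameters
lr = 0.05          # learning rate for both contestants

for step in range(2000):
    real = 4.0 + random.gauss(0, 0.5)     # a sample of "real" data
    fake = g + random.gauss(0, 0.5)       # the generator's forgery

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: nudge g so the discriminator rates fakes as more real.
    fake = g + random.gauss(0, 0.5)
    d_fake = sigmoid(w * fake + b)
    g += lr * (1 - d_fake) * w

print(round(g, 2))  # tends to hover around 4.0, the mean of the real data
```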

It is said that GANs can also function and offer workable output with relatively small data sets (which is an advantage compared to some other techniques).[37]

GANs have huge applications in the arts, advertising, film, animation, fashion design, video gaming, etc.; these professional fields are each known as multi-billion-dollar industries. Besides entertainment, the technique is also of use in sciences such as physics, astronomy and so on.

Applications

One can learn how to understand and build ANNs online via a number of resources. Here below are a few hand-picked projects that might offer a beginner’s taste to the technology.

Project #___: Making Machine Learning Neural Networks (for K12 students by Oxford University)
Project source: https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-6.html

Project #___: Rashid, T. (2016). Make Your Own Neural Network
A project-driven book examining the very basics of neural networks and aiding the learner, step by step, in creating a network. Published as an eBook or on paper via the CreateSpace Independent Publishing Platform.
This might be easily digested by Middle School students or learners who cannot spend too much effort yet do want to learn about neural networks in an AI context.
Information retrieved on April 2, 2020 from http://makeyourownneuralnetwork.blogspot.com/

Project #___: A small example: Training a model to estimate square roots (click on the image to enter the SNAP! environment)
Project source: https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-6.html

Project #___: Training a model to label data (click on the image to enter the SNAP! environment)
Project source: https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-6.html

Project #___: Training a model to predict how you would rate abstract "art"
Project source: https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-6.html

Project #___: A Neural Network to recognize hand-written digits
This project comes with an online book and code by Michael Nielsen.
Source code: https://github.com/mnielsen/neural-networks-and-deep-learning
Updated source code: https://github.com/MichalDanielDobrzanski/DeepLearningPython35
Database: http://yann.lecun.com/exdb/mnist/ (a training set of 60,000 examples, and a test set of 10,000 examples)
Study material: http://neuralnetworksanddeeplearning.com/chap1.html

Project #___: MuZero: Build a Neural Network using Python[1]
Project source: https://medium.com/applied-data-science/how-to-build-your-own-muzero-in-python-f77d5718061a
[1] Schrittwieser, J. et al. (2020). Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. Online: arXiv.org, Cornell University; Retrieved on April 1, 2020 from https://arxiv.org/abs/1911.08265

References & URLs

[1]Sanderson, G.  (? Post-2016).  3BLUE1BROWN SERIES. But what is a Neural Network? | Deep Learning, chapter 1.  S3 • E1 (Video). Online. Retrieved on April 22, 2020 from https://www.bilibili.com/video/BV12t41157gx?from=search&seid=15254673027813667063 AND the entire series: https://search.bilibili.com/all?keyword=3Blue1Brown&from_source=nav_suggest_new AND https://www.youtube.com/watch?v=aircAruvnKk Information Retrieved from https://www.3blue1brown.com/about

[2] Nielsen, M. (2019). Neural Networks and Deep Learning. Online: Determination Press. Retrieved on April 24, 2020 from http://neuralnetworksanddeeplearning.com/  AND https://github.com/mnielsen/neural-networks-and-deep-learning AND http://michaelnielsen.org/

[3] Marsland, S. (2015). Machine Learning. An Algorithmic Perspective. Boca Raton, FL, USA: CRC Press. p.14

[4] Charniak, E. (2018). Introduction to Deep Learning. Cambridge, MA: The MIT Press p.51

[5] Sanderson, G.  (? Post-2016). 

[6] Du Sautoy, M. (2019). The Creative Code. How AI is Learning to Write, Paint and Think. London: HarperCollins Publishers. pp.117

[7]Dr. Hecht-Nielsen, Robert in Caudill, M. (December, 1987).  “Neural Network Primer: Part I”. in AI Expert Vol. 2, No. 12, pp 46–52. USA:      Miller Freeman, Inc. Information Retrieved on April 20, 2020 from https://dl.acm.org/toc/aiex/1987/2/12 and  https://dl.acm.org/doi/10.5555/38292.38295 ; citation retrieved from https://www.oreilly.com/library/view/hands-on-automated-machine/9781788629898/0444a745-5a23-4514-bae3-390ace2dcc61.xhtml

[8] Rashid, T.  (2016). Make Your Own Neural Network.  CreateSpace Independent Publishing Platform

[9] Sabour, S. et al. (2017). Dynamic Routing Between Capsules. Online: arXiv.org, Cornell University; Retrieved on April 22, 2020 from https://arxiv.org/pdf/1710.09829.pdf

[10] Sabbatini, R. (Feb 2003). Neurons and Synapses. The History of Its Discovery. IV. The Discovery of the Synapse. Online: cerebromente.org. Retrieved on April 23, 2020 from http://cerebromente.org.br/n17/history/neurons4_i.htm

[11] Tang Y. et al (2001). Total regional and global number of synapses in the human brain neocortex. In Synapse 2001;41:258–273.

[12] Zheng, Z., et al. (2018). A Complete Electron Microscopy Volume of the Brain of Adult Drosophila melanogaster. In Cell, 174(3), 730–743.e22

[13] Haykin, S. (2008). Neural Networks and Learning Machines. New York: Pearson Prentice Hall. p.1

[14] McCulloch, W.. & Pitts, W. (1943; reprint: 1990). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, Vol. 5, pp.115-133. Retrieved online on February 20, 2020 from  https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf  

[15] Sanderson, G.  (? Post-2016). 

[16] Rosenblatt, F. (January, 1957). The Perceptron. A Perceiving and Recognizing Automaton. Report No. 85-460-1. Buffalo (NY): Cornell Aeronautical Laboratory, Inc. Retrieved on January 17, 2020 from https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf 

[17] Samek, W. et al (2019). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Lecture Notes in Artificial Intelligence. Switzerland: Springer. p.9

[18] Haykin, S. (2008). p.2

[19] Gerrish, S. (2018). How Smart Machines Think. Cambridge, MA: The MIT Press. pp. 18

[20] Gerrish, S. (2018). pp. 73

[21] Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (9 October 1986). “Learning representations by back-propagating errors”. Nature. 323 (6088): 533–536

[22] Montavon, G. et al. (2012). Neural Networks: Tricks of the Trade. New York: Springer. Retrieved on March 27, 2020 from https://link.springer.com/book/10.1007/978-3-642-35289-8 AND https://machinelearningmastery.com/neural-networks-tricks-of-the-trade-review/   

[23] de Marchi, L. et al. (2019). Hands-on Neural Networks. Learn How to Build and Train Your First Neural Network Model Using Python. Birmingham & Mumbai: Packt Publishing. p. 9.

[24] Charniak, E. (2018). Introduction to Deep Learning. Cambridge, MA: The MIT Press. p. 10

[25] de Marchi, L. et al. (2019). p. 118-119.

[26] Géron, A. (February, 2018). Introducing capsule networks. How CapsNets can overcome some shortcomings of CNNs, including requiring less training data, preserving image details, and handling ambiguity. Online: O’Reilly Media. Retrieved on April 22, 2020 from https://www.oreilly.com/content/introducing-capsule-networks/

[27] Sabour, S. et al. (2017)

[28] Géron, A. (2017). Capsule Networks (CapsNets) – Tutorial (video). Retrieved on April 22, 2020 from https://www.bilibili.com/video/av17961595/ AND  https://www.youtube.com/watch?v=pPN8d0E3900

[29] Géron, A. (February, 2018).

[30] Tan, K. (November, 2017).  Capsule Networks Explained. Online. Retrieved on April 22, 2020 from https://kndrck.co/posts/capsule_networks_explained/ AND https://gist.github.com/kendricktan/9a776ec6322abaaf03cc9befd35508d4

[31] Goodfellow, I. et al. (June 2014). Generative Adversarial Nets. Online: Neural Information Processing Systems Foundation, Inc  Retrieved on March 11, 2020 from https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf AND Online: arXiv.org, Cornell University;  https://arxiv.org/abs/1406.2661

[32] Skanski, S. (2020). Guide to Deep Learning. Basic Logical, Historical and Philosophical Perspectives. Switzerland: Springer Nature. p. 127

[33] Isola, P. et al. (2016, 2018). Image-to-Image Translation with Conditional Adversarial Networks. Online: arXiv.org, Cornell University; Retrieved on April 16, 2020 from https://arxiv.org/abs/1611.07004

[34] Zhang, H. et al. (2017). StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks. Online: arXiv.org, Cornell University; Retrieved on April 16, 2020 from https://arxiv.org/pdf/1612.03242.pdf

[35] Bar El, O. et al. (2019). GILT: Generating Images from Long Text. Online: arXiv.org, Cornell University; Retrieved on April 16, 2020 from https://arxiv.org/abs/1901.02404

[36] Zhu, J. (2017). Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Online: arXiv.org, Cornell University; Retrieved on April 16, 2020 from https://arxiv.org/pdf/1703.10593.pdf

[37] Skanski, S. (2020). p.127

The Field of AI (Part 04): AI APPROACHES AND METHODS

AI & Games

A History of AI Research with Games

Games (computer games and board games alike) have been used in AI research and development since the early 1950s. Scientists and engineers focus on games to measure certain stages of success in AI developments. Game settings form a closed testing environment, as if it were a lab, with a specific set of rules and steps. Games have a clear objective or a clear set of goals. Games also allow researchers to explore and understand possible applications of probability (e.g. calculating the chances of winning if certain parameters are met or followed). Since very specific and focused problems need to be solved within specific game architectures, games are ideal for testing Narrow AI applications.

Narrow AI solutions are what scientists have achieved so far, as opposed to a ‘General AI’ solution. A General AI solution (or ‘Strong AI’) would be a super-intelligent construct able to solve many, if not any, humanly-thinkable problems and/or beyond. The latter is still science fiction (until it is not). The former, Narrow AI solutions, exist in many applications and can be tested in a game setting. Results from such AI designs, within game play, can subsequently be transferred into other areas (e.g. solutions for language translation, speech recognition, weather forecasting, sales predictions, autonomously operating mechanical arms, managing efficiency in a country’s electric grid,[1] or other systems).

Some Narrow AI solutions use a method of Machine Learning that is called “Reinforcement Learning.” In simple terms, it is a way of learning by rewards or scoring. For that reason too, games are an obvious environment that can be used indefinitely to test and improve an AI application. Games lead to rewards or scores; one can even win them.
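
As a minimal, hypothetical illustration of learning purely from rewards (a bandit-style sketch in Python rather than any of the systems named in this post), consider the snippet below; the game, its reward probabilities and the exploration rate are all invented for illustration.

```python
# Learning from rewards only: the agent tries two possible moves and keeps
# a running estimate of how much reward each move tends to give.
import random

value = {"move_a": 0.0, "move_b": 0.0}   # estimated reward per move
counts = {"move_a": 0, "move_b": 0}

def play(move):
    """A made-up game: move_b wins (reward 1) more often than move_a."""
    chance = 0.8 if move == "move_b" else 0.3
    return 1.0 if random.random() < chance else 0.0

for episode in range(1000):
    # Mostly pick the best-looking move, but sometimes explore the other one.
    if random.random() < 0.1:
        move = random.choice(list(value))
    else:
        move = max(value, key=value.get)
    reward = play(move)
    counts[move] += 1
    value[move] += (reward - value[move]) / counts[move]  # running average

print(value)   # the estimate for move_b should end up clearly higher
```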

Moreover, a (computer) game can be played by multiple copies or versions of an AI solution, speeding up the process of reaching the best solution or strategy (to win). The latter can, for instance, be achieved by means of “evolutionary algorithms.” These are algorithms that improve themselves through, for instance, mutation or a process of selection, as if through a biological natural selection of the fittest (i.e. an autonomous selecting, by means of a process, of a version or offspring of an algorithm that is better at solving something, while ignoring another that is not). Note, though, that if the AI plays a computer game that has a bug, it might exploit the bug to win instead of learning the game.[2]
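
A tiny, illustrative sketch of such an evolutionary loop in Python: candidate ‘strategies’ (here just numbers scored against a made-up target) are repeatedly selected and mutated; the fitness function and all parameters are invented for illustration and do not represent any actual game-playing system.

```python
# A minimal evolutionary loop: select the fittest candidates, mutate them, repeat.
import random

TARGET = 42.0                           # the made-up "winning" value

def fitness(candidate):
    """Higher is better: closeness to the target stands in for 'winning'."""
    return -abs(candidate - TARGET)

population = [random.uniform(0, 100) for _ in range(20)]

for generation in range(50):
    # Selection: keep the better half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Reproduction with mutation: each survivor spawns a slightly altered child.
    children = [parent + random.gauss(0, 1.0) for parent in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(round(best, 2))   # should end up close to 42.0
```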

Chess has been one of the first games, besides checkers, to have been approached by the AI research community.[3] As mentioned previously, in the mid-1950s Dr. SAMUEL, Arthur wrote a checkers program. A few years earlier (circa 1951) trials were made to write applications for both chess (by Dr. PRINZ, Dietrich) and checkers (by Dr. STRACHEY, Christopher). While these earliest attempts are presently perhaps dismissed as not really being a type of AI application (since, at times, some coding tricks were used), in those days they were a modest, yet first, benchmark of what was to come in the following decades.

For instance, on May 11, 1997, the computer named “Deep Blue” beat Mr. KASPAROV,[4] the chess world champion of that time. A number of such achievements have followed, covering a number of games. Compared to today’s developments, Deep Blue is no longer that impressive. In 2016, by using a form of Machine Learning, namely Deep Learning, AlphaGo defeated the world champion Mr. Lee Sedol at Wéiqí (also known as the game of Go). That AI solution was later surpassed by AlphaGo Zero (and, later still, AlphaZero). This system used yet another form of Machine Learning, namely Reinforcement Learning (a method mentioned here previously). This AI architecture played against itself and then against AlphaGo, winning all of its Wéiqí games against AlphaGo.

In 2017, LěngPūDàshī, the poker-playing AI, defeated some of the world’s top players at Texas Hold ‘Em poker. Now scientists are trying to defeat players of complex real-time online strategy video games with AI solutions. While such games might not be taken seriously by some people, they are, technically and through the lens of AI developments, far more complex than, for instance, a chess game. Some successes have already been booked: on April 17, 2019 an AI solution defeated Dota 2 champions. Earlier that same year, human players were defeated at a game of StarCraft II. Note, the same algorithm that was trained to play Dota 2 can also be taught to move a mechanical hand. Improvements in benchmarking AI solutions with games do not stop.[5]


Hands-on Learning with AI Research through Games

Project #___: Build your own Game AI with TIC TAC TOE (Arduino version)
Project source:

Project #___: TIC TAC TOE Iteration #2 (SCRATCH implementations)
Project source:

Project #___: TIC TAC TOE Iteration #3 (Berkeley’s SNAP! + Oxford AI implementation for K12)
Project source:

[1] Anthony, S. (March 14, 2017). DeepMind in talks with the National Grid to reduce UK energy use by 10%. Online: ars technica. Retrieved February 14, 2020 from https://arstechnica.com/information-technology/2017/03/deepmind-national-grid-machine-learning/

[2] Vincent, J. (February 28, 2018). A Video game-playing AI beat Q*bert in a way no one’s ever seen before. Online: The Verge. Retrieved February 14, 2020 from https://www.theverge.com/tldr/2018/2/28/17062338/ai-agent-atari-q-bert-cracked-bug-cheat

[3] Copeland, J. (May, 2000). What is Artificial Intelligence? Sections: Chess. Online: AlanTuring.net Retrieved February 14, 2020 from http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI12.html

[4] Kasparov, G. (March 25, 1996). The Day I Sensed a New Kind of Intelligence. Online: Time Retrieved February 14, 2020 from http://content.time.com/time/subscriber/article/0,33009,984305-1,00.html

[5] An example of the process of continued developments is very well unfolded here: https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go ; URL last checked on March 10, 2020

The Field of AI (Part 03): A Recent History


A Consideration on Stories of Recent Histories

This story is not a fixed point, nor is the one story offered here below one that controls all possible AI stories. We are able to note that a history, such as this one, is a handpicking from a source that is richer than the story consequently put in front of you, here, on a linear and engineered chronology. The entirety of the field of AI is better contextualized with parallel storylines, faded-in trials and faded-out errors, with many small compounded successes and numerous complexities. Histories tend to be messy. This story does not pretend to do justice to that richness.

Just like a numerical dataset, history is a (swirling) pool of data. Just as an information processing unit, hopefully enabled to identify a relevant pattern that still could be prone to an unwanted bias, ambiguities, and probabilities with given uncertainties, so too is this narrative of a history on the dynamic field of AI studies, its researches and its developments. In realizing this, one can only wish that the reader of this story shares the wish to grow towards an increased self-reliant literacy, nurtured with “data” (read the word “data” here as “historical resources” among more numerical sources) and analytical ability.

A Suggested Mini Project Concept for Students:

Mini Project #___: Datasets & Pattern Recognition Opportunities are Everywhere
The above consideration could be part of any storytelling, and its implication is not an insurmountable problem. It is an opportunity, especially in the area of the data sciences associated with AI research and development. See this story as an invitation to go out into the field and study more, to get a more nuanced sense of this history’s depths and the adventures within it. Try to see its datasets and their correlations, fluidities, contradictions and complexities. The abilities to do so are essential skills in the field of AI, as well as important skills for a human in a complex and changing world.
What “algorithm” might the author of this story have used when recognizing a pattern from a given dataset in creating this story? (there is no right or wrong answer)
It is almost obvious that a learner can only aspire toward something they know as something that existed, exists or could be imagined to exist. That is simultaneously true for learning from the data selection processes of another; for instance, the authoring and applying of the following history of the field of AI.
Can you create your own version of an AI history? What kind of filter, weight, bias or algorithm have you decided to use in creating your version of an AI history?
Figure 01 Cells from a pigeon brain. Drawing made in 1899, of Purkinje cells (A) and granule cells (B) from pigeon cerebellum by Santiago Ramón y Cajal; Instituto Cajal, Madrid, Spain. Image: Public Domain Retrieved on March 27, 2020 from here


A Recent History of the Field of AI

Just like a present-day AI solution, a human learner too needs datasets to see the pattern of their own path within the larger field. Who knows, digging into the layers of AI history might spark a drive to innovate on an idea some researchers had touched on in the past yet have not further developed. This has been known to happen in a number of academic fields, of which the field of AI is no exception.[1] Here it is opted to present a recent history of the field of AI[2] with a few milestones from the 20th and 21st centuries:

By the end of the 1930s and during the decade of 1940-1950, scientists and engineers joined with mathematicians, neurologists, psychologists, economists and political scientists to theoretically discuss the development of an artificial brain or of the comparison between the brain, intelligence and what computers could be (note, these did not yet exist in these earliest years of the 1940s).

In 1943, McCulloch & Pitts offered a theoretical proposal for a Boolean logic[3] circuit model of the brain.[4] These could be seen as the theoretical beginnings of what we know today as the Artificial Neural Networks.

In 1950 Turing wrote his seminal paper entitled “Computing Machinery and Intelligence.”[5] Slightly later, in the 1950s, early AI programs included Samuel’s checkers game program, Newell & Simon’s Logic Theorist, and Gelernter’s Geometry Engine. It has been suggested that perhaps the first AI program was the checkers game software. Games, such as Chess, Wéiqí (aka Go) and others (e.g. LěngPūDàshī, the poker-playing AI[6]) have been, and continue to be, important in the field of AI research and development.

While relatively little about it is said to have withstood the test of time, in 1951 a predecessor of the first Artificial Neural Network was created by Marvin Minsky and Dean Edmonds.[7] It was named “SNARC”, which is short for “Stochastic Neural Analog Reinforcement Computer”.[8] The hardware system solved mazes: it simulated a rat finding its way through a maze. This machine was not yet a programmable computer as we know it today.

The academic field of “Artificial Intelligence Research” was created between 1955 and 1956.[9] The term “AI” was suggested by John McCarthy, who is claimed to have done so during the 1956 Dartmouth College conference in Hanover, New Hampshire, USA. McCarthy defined AI as “… the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable…”[10]

At that same time, the “Logic Theorist” was introduced by other scientists as the first Artificial Intelligence application.[11] It was able to prove a number of mathematical theorems.

In January 1957 Frank Rosenblatt proposed the concept of a single-layered neural network. He invented the photoperceptron (“perceptron” for short), an electronic automaton and model analogous to the brain, in the simplest sense thinkable, that would have the ability to “learn” visual patterns and to process such “…human-like functions as perception, recognition, concept formation, and the ability to generalize from experience… [This system would get its inputs] directly from the physical world rather than requiring the intervention of a human agent to digest and code the necessary information.”[12] In short, the machine was aimed at recognizing and classifying a given geometric shape, following the input from a camera.

It is natural and healthy in the sciences to inquire with intellectual integrity and wonder, and to insist on verification, corroboration and falsifiability[13] of theories and designs. So the photoperceptron design, too, did not escape scrutiny and the common peer review.[14] At one point, the perceptron was perceived to be of debated applicability and of contested interest as a basis for further research and development within AI.[15] Several decades later, following a couple of “AI Winters,” academic meanderings, and a substantial increase in computational power and processing techniques, it would turn out to be a fruitful basis for a specific area of AI research and development: Machine Learning, its area of Deep Learning and its multilayered Artificial Neural Networks.[16]

1959: The term “Machine Learning” was coined by the IBM electrical engineer and Stanford Professor Arthur Lee Samuel. He wrote the first successful self-learning program; it played a game of checkers.[17] This was an early demonstration of an AI-type application of the kind that would become a hot topic in the 21st century and into present-day academic work.

The period from the nineteen fifties (1950s) into the earliest years of the nineteen seventies (early 1970s): during this two-decades-long period there was a lot of excitement around the promises suggested within the research and developments in the Artificial Intelligence field.

In 1965 Robinson invented an algorithm[18] for logical reasoning that laid the groundwork for a form of computer programming,[19] allowing for the automated proving of mathematical theorems.

Around 1970 Minsky and Papert considered the multilayered perceptron (MLP)[20] which could be seen as a theoretical predecessor to the multilayered neural networks as they are researched and developed today, in the AI sub-field of Machine Learning and its Deep Learning techniques.

Reflecting back on the years around 1973,[21] some voices speak of the first “AI Winter”,[22] while others do not seem to mention this period at all.[23] Either way, two forces are perceived to have collided during this time. On one side were academics and others who wished to pursue research in specific directions within the field of AI and who continued to need funding. On the other side were academics with understandable doubts[24] and those controlling the funds,[25] who no longer believed much in the (inflated) promises made within AI research of that period. As money became limited, research and development slowed down, and more focused, result-oriented work was required to obtain funds. At least, so it seemed for a period of time, until after the mid-1970s or into the early 1980s.[26] Depending on the historical source, this period has been demarcated rather differently (and perhaps views on what counts as significant differ as well).[27]

Fading in from the early 1970s and lasting until the early 1990s, the focus of AI research and development was on what are referred to as Knowledge-based approaches. Those designing this type of solution sought to “hard-code knowledge about the world in formal languages…” However, “…none of these projects has led to a major success. One of the most famous such projects is Cyc…”[28] Humans had to code the knowledge by hand, which created a number of concerns and problems: the experts could not sufficiently and accurately encode all the nuances of the real world surrounding the topic which the application was supposed to “intelligently” manage.

Following the earliest introductions by Edward Feigenbaum in 1965, further, yet still early, developments of these “Knowledge-based Systems”[29] (KBS) continued into the 1970s. Some then came to (commercial) fruition during the 1980s in the form of what were by then called “Expert Systems” (ES). KBS and ES are not exactly the same, but they are historically connected. These later systems were claimed to represent how a human expert would think through a highly specific problem, with the processing carried out by means of IF-THEN rules. During the 1980s, mainstream AI research and development focused on these logic-based, programmed Expert Systems. Prolog, a programming language initially aimed at Natural Language Processing,[30] has been one of the favorites for designing Expert Systems.[31] All expert systems are knowledge-based systems; the reverse is not true. By the mid-1980s, Professor John McCarthy would criticize these systems for not living up to their promises.[32]
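
To give a feel for such IF-THEN processing, here is a minimal, hedged sketch of forward chaining in Python; it is a toy stand-in rather than a real Expert System or Prolog program, and the facts and rules are invented for illustration.

# A toy IF-THEN forward-chaining sketch (illustrative only; not a real Expert System).
facts = {"has_fever", "has_cough"}                     # invented starting facts
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),       # IF fever AND cough THEN suspect flu
    ({"suspect_flu"}, "recommend_rest"),               # IF suspect flu THEN recommend rest
]

changed = True
while changed:                     # keep firing rules until nothing new is derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # the THEN part adds a new fact
            changed = True

print(sorted(facts))
# ['has_cough', 'has_fever', 'recommend_rest', 'suspect_flu']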

In the late 1980s, Carver Mead[33] introduced the idea of mimicking the structure and functions of neuro-biological architectures (e.g. the brain, or the eye’s retina and visual perception) in the research and development of AI solutions, both in hardware and in software. This approach, especially in chip design, has become known as “Neuromorphic Engineering” and is considered a sub-field of Electrical Engineering.

Jumping forward to present-day research and development, “neuromorphic computing” carries the promise of processing data in a more analog manner, rather than the digital manner traditionally found in our everyday computers. It could, for instance, imply the implementation of artificial neural networks directly on a computer chip. This could mean that the intensity of a signal is not bluntly on or off (read: 1 or 0) but can instead vary. One can read more about this and about some forms of artificial neural networks by looking at, for instance, gates, thresholds, and the practical application of the mathematical sigmoid function, to name but a few.
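
As a small, hedged illustration of that contrast, the Python sketch below compares a blunt on/off threshold unit with a sigmoid unit whose output varies smoothly between 0 and 1; the sample input values are arbitrary.

# Contrasting a hard threshold ("on or off") with a smoothly varying sigmoid.
import math

def step(x, threshold=0.0):
    # Blunt gate: the output is either 0 or 1.
    return 1 if x >= threshold else 0

def sigmoid(x):
    # Smooth "varying intensity" between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):        # arbitrary sample inputs
    print(x, step(x), round(sigmoid(x), 3))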

Simultaneously, the years from 1988 until about 1995 are claimed by some to mark a second “AI Winter”,[34] while others place the period a few years earlier.[35] The exact years aside, resources became temporarily limited again, and research output was perceived to be at a low. That said, this does not imply that all research and development in computing hardware and software halted during these proverbial winters. The work continued, albeit for some under additional constraints, or under a different name or field (not labeled “AI”). One might agree that, in science, research and development across academic fields seems to ebb, flow and meander, yet persists with grit.

From 1990 onward, slowly but surely, the concepts of probability and “uncertainty” took more of the center stage (e.g. Bayesian networks). Statistical approaches became increasingly important in work toward AI methods. “Evolution-based methods, such as genetic algorithms and genetic programming” helped move AI research forward.[36] It was increasingly hypothesized that a learning agent could adapt to (read: learn from) the changing attributes of its environment, where change implies that some events become more probable while others become less so.
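
As a toy, hedged illustration of such probabilistic updating, the Python sketch below applies Bayes’ theorem to a single binary hypothesis; the “agent”, the sensor and all the numbers are invented for this example.

# A toy Bayesian update for one binary hypothesis (all numbers are invented).
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # Posterior P(H | evidence) via Bayes' theorem.
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# An "agent" starts with a 50% belief that a door is open (the prior),
# then observes a noisy sensor reading that says "open".
posterior = bayes_update(prior=0.5, p_evidence_if_true=0.9, p_evidence_if_false=0.2)
print(round(posterior, 3))  # ~0.818: the belief adapts to the new evidence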

AI solutions started to extract patterns from data sets rather than being guided by hand-written rules alone. This probabilistic approach, in combination with further algorithmic developments, gradually heralded a radically different approach from the earlier “Knowledge-based Systems”. It has ushered in what some refer to as an “AI Spring”,[37] which some perceive as spanning the past few years up to the present day.[38]

In the twenty-first century up to the present day, Artificial Neural Networks have been explored in academic research with increasing success. It became ever clearer that huge, high-quality data sets were needed to make a specific area of AI research, known as Machine Learning, more powerful.

During the first decade of this century, Professor Fei-Fei Li oversaw the creation of such a huge, high-quality image dataset (ImageNet), which would become one of the milestones in restoring confidence in the field of AI and in the quality of algorithm design.[39]

This story has now arrived at the more recent years of AI Research and Development.

Following the first decade of this twenty-first century, GPUs (graphics processing units) were increasingly used as hardware to power Machine Learning applications. Presently, even more advanced processing hardware is being proposed and applied.

Special types of Machine Learning solutions are being developed and improved upon; specifically, Deep Learning appeared on the proverbial stage. The developments in Artificial Neural Networks and in the layering of these networks became another important boost in the perception of the potential of applications coming out of AI Research and Development (R&D).[40]

Deep Learning is increasingly becoming its own unique area of creative and innovative endeavor within the larger Field of AI.

Globally, major investments (in the tens of billions of dollars) have been made in AI R&D. There is a continued and even increasing hunger for academics, experts and professionals in various fields related to, or within, the field of AI.

The historical context of the field of AI, of which the above is a handpicked narrative, has brought us to where we are today. How this research is applied and will be developed for increased application will need to be studied, tried and reflected on, with continued care, considerate debate, creative spirit and innovative drive.


Footnotes & URLs to Additional Resources

[1] Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611–659. London: Sage Publications

[2] Here loosely based on: Professor Dan Klein and Professor Pieter Abbeel. (January 21st, 2014). CS188 “Intro to AI” Lecture. UC Berkeley.

[3] George Boole (1815 – 1864) came up with a kind of algebraic logic that we now know as Boolean logic in his works entitled The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854). He also explored general methods in probability. A Boolean circuit is a mathematical model, with calculus of truth values (1 = true; 0 = false) and set membership, which can be applied to a (digital) logical electronic circuitry.

[4] McCulloch, W. & Pitts, W. (1943; reprint: 1990). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133. Retrieved online on February 20, 2020 from  https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf   

[5] Turing, A.M. (1950). Computing Machinery and Intelligence. Mind 49: 433-460. Retrieved November 13, 2019 from http://cogprints.org/499/1/turing.html and https://www.csee.umbc.edu/courses/471/papers/turing.pdf

[6] Spice, B. (April 11, 2017). Carnegie Mellon Artificial Intelligence Beats Chinese Poker Players. Online: Carnegie Mellon University. Retrieved January 7, 2020 from https://www.cmu.edu/news/stories/archives/2017/april/ai-beats-chinese.html 

[7] Martinez, E. (2019). History of AI. Retrieved on April 14, 2020 from https://historyof.ai/snarc/

[8] Minsky, M. (2011). Building my randomly wired neural network machine. Online: Web of Stories   Retrieved on April 14, 2020 from https://www.webofstories.com/play/marvin.minsky/136;jsessionid=E0C48D4B3D9635BA883747C9A925B064

[9] Russell, S. & Norvig, P. (2016); and McCorduck, P. (2004). Machines Who Think. Natick, MA: A K Peters, Ltd.

[10] McCarthy, J. (2007). What is AI? Retrieved on December 5th, 2019 from http://www-formal.stanford.edu/jmc/whatisai/node1.html This webpage also offers a nice, foundational and simple conversation about intelligence, IQ and related matters. 

[11] McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick: A K Peters, Ltd

[12] Rosenblatt, F. (January, 1957). The Perceptron. A Perceiving and Recognizing Automaton. Report No. 85-460-1. Buffalo (NY): Cornell Aeronautical Laboratory, Inc. p. 1 & 30 Retrieved on January 17, 2020 from https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf  

[13] Popper, K. (1959, 2011). The Logic of Scientific Discovery. Taylor and Francis

[14] Minsky, M. and Papert, S.A. (1971). Artificial Intelligence Progress Report. Boston, MA:MIT Artificial Intelligence Laboratory. Memo No. 252.  pp. 32 -34 Retrieved on April 9, 2020 from https://web.media.mit.edu/~minsky/papers/PR1971.html or  http://bitsavers.trailing-edge.com/pdf/mit/ai/aim/AIM-252.pdf

[15] Minsky, M. and Papert, S.A. (1969, 1987). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: The MIT Press

[16] Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611–659. London: Sage Publications

[17] Samuel, A.L. (1959, 1967, 2000). Some Studies in Machine Learning Using the Game of Checkers. Online: IBM Journal of Research and Development, 44(1.2), 206–226. doi:10.1147/rd.441.0206 Retrieved February 18, 2020 from https://dl.acm.org/doi/10.1147/rd.33.0210 and  https://www.sciencedirect.com/science/article/abs/pii/0066413869900044 and https://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf

[18] It is known as the “unification algorithm”. Robinson, John Alan (January 1965). A Machine-Oriented Logic Based on the Resolution Principle. J. ACM. 12 (1): 23–41 Retrieved on March 24, 2020 from https://dl.acm.org/doi/10.1145/321250.321253 and https://web.stanford.edu/class/linguist289/robinson65.pdf

[19] The form is what could now be referred to as a logic-based declarative programming paradigm: the code tells a system what you want it to do, by means of formal logic facts and rules for some problem, rather than stating step by step how it should do it. There are at least two main paradigms, each with their own sub-categories. This logic-based one is a subcategory of the declarative programming set of coding patterns and standards. The other main paradigm (with its subsets) is imperative programming, which includes object-oriented and procedural programming; the latter includes the C language. See Online: Curlie. Retrieved on March 24, 2020 from https://curlie.org/Computers/Programming/Languages/Procedural  Examples of (class-based) object-oriented imperative programming languages are C++, Python and R. See: https://curlie.org/en/Computers/Programming/Languages/Object-Oriented/Class-based/

[20] Minsky, M. and Papert, S.A. (1969, 1987) p. 231 “Other Multilayer Machines”.

[21] Lighthill, Sir J. (1972). Lighthill Report: Artificial Intelligence: A General Survey. Retrieved on April 9, 2020 from http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm and https://pdfs.semanticscholar.org/b586/d050caa00a827fd2b318742dc80a304a3675.pdf and http://www.aiai.ed.ac.uk/events/lighthill1973/

[22] Bostrom, N. (2014). Superintelligence. Paths, Dangers, Strategies. Oxford: Oxford University Press. p. 22

[23] McCorduck, P. (2004). pp. xxviii – xxix

[24] Minsky, M. and Papert, S.A. (1969, 1987)

[25] Historic Examples: Pierce, J. R. et al (1966). Language and Machines: Computers in Translation and Linguistics. Washington D. C.: The Automatic Language Processing Advisory Committee (ALPAC). Retrieved on April 9, 2020 from The National Academies of Sciences, Engineering and Medicine at   https://www.nap.edu/read/9547/chapter/1 alternatively: http://www.mt-archive.info/ALPAC-1966.pdf

[26] Hutchins, W. J. (1995). Machine Translation: A Brief History. In Koerner, E. E.K. .et al (eds). (1995). Concise history of the language sciences: from the Sumerians to the cognitivists. Pages 431-445. Oxford: Pergamon, Elsevier Science Ltd. p. 436

[27] Russell, S. et al. (2016, p. 24) does not seem to mention this first “AI Winter” and only mentions the later one at the end of the 1980s; neither does McCorduck, P. (2004, pp. xxviii–xxix). Ghatak, A. (2019, p. vii), however, identifies more than one, as do Maini, V. et al. (Aug 19, 2017), Mueller, J. P. et al. (2019, p. 133) and Chollet, F. (2018, p. 12). Perhaps these authors, who mainly focus on Deep Learning, see the lull in research following Rosenblatt’s perceptron as a “winter”.

[28] Goodfellow, I., et al. (2016, 2017). Deep Learning. Cambridge, MA: The MIT Press. p. 2

[29] More in-depth information can be found in the journal of the same name: https://www.journals.elsevier.com/knowledge-based-systems

[30] Hutchins, W. J. (1995). p. 436

[31] Some Prolog resources related to expert systems: https://www.metalevel.at/prolog/expertsystems AND https://en.wikibooks.org/wiki/Prolog

[32] McCarthy, J. (1996). Some Expert Systems need Common Sense. Online: Stanford University, Computer Science Department. Retrieved on April 7, 2020 from   http://www-formal.stanford.edu/jmc/someneed/someneed.html

[33] Mead, C. Information Retrieved on April 8, 2020 from http://carvermead.caltech.edu/ also see Mead, C. (1998). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley from https://dl.acm.org/doi/book/10.5555/64998

[34] Russell (2016) p. 24

[35] McCorduck (2004) p. 418

[36] Bostrom, N. (2014). Superintelligence. Paths, Dangers, Strategies. Oxford: Oxford University Press. pp.24

[37] Manyika, J. et al (2019). The Coming of AI Spring. Online: McKinsey Global Institute. Retrieved on April 9, 2020 from https://www.mckinsey.com/mgi/overview/in-the-news/the-coming-of-ai-spring

[38] Olhede, S., & Wolfe, P. (2018). The AI spring of 2018. Significance, 15(3), 6–7. Retrieved on April 9, 2020 from https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2018.01140.x

[39] Deng, J. et al. (2009). ImageNet: A Large-Scale Hierarchical Image Database. Online: Stanford Vision Lab, Stanford University & Princeton University Department of Computer Science. Retrieved April 7, 2020 from http://www.image-net.org/papers/imagenet_cvpr09.pdf

[40] Trask, A. W. (2019). Grokking Deep Learning. USA: Manning Publications Co. p. 170

The Field of AI (Part 02-6): A “Pre-History” & a Foundational Context

post version: 2 (April 28, 2020)

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link on Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following, one can read about very few attributes picked up on from Control Theory as contextualizing to the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


05 — The Field of AI: A Foundational Context: Mathematics & Statistics

Mathematics & Statistics

The word ‘mathematics’ comes from Ancient Greek and means as much as “fond of learning, study or knowledge”. G. H. Hardy (1877–1947), a famous mathematician, defined mathematics as the study and the making of patterns.[1] At least intuitively, seen from these different perspectives, this might make the link between the fields of Cognitive Science, AI and mathematics a bit more obvious or exciting to some.

Looking at these two simple identifiers of math, one might come to appreciate math in itself even more, but one might also think slightly differently of “pattern recognition” in the field of “Artificial Intelligence” and its sub-study of “Machine Learning.”[2] Following this, one might wonder whether mathematics perhaps lies at the foundation of machine (or other) learning.

Mathematics[3] and its many areas cover formal proof, algorithms, computation and computational thinking, abstraction, probability, decidability, and so on. Many introductory K-16 resources on various mathematical topics,[4] such as statistics,[5] are freely accessible.

Statistics, as a sub-field or branch of mathematics, is the academic area focused on data and their collection, analysis (e.g. preparation, interpretation, organization, comparison, etc.), and visualization (or other forms of presentation). The field studies models based on these processes imposed onto data. Some practitioners argue that Statistics stands separately from mathematics.

The following areas of study in mathematics (and more) lie at the foundation of Machine Learning (ML).[6] Yet, it should be noted, one never stops learning mathematics for specialized ML applications:

  • (Bayesian) Statistics[7]
    • Statistics.[8]
    • See a future post for more perceptions on probability
    • Probability[9] Theory,[10] which is applied to make assumptions about likelihood in the given data (Bayes’ Theorem, distributions, MLE, regression, inference, …);[11]
    • Markov[12] Chains[13] which model probability[14] in processes that are possibly changing from one state into another (and back) based on the present state (and not past states).[15]
    • Linear Algebra,[16] which is used to describe parameters and to build algorithm and Neural Network structures;
      • Algebra for K-16[17]. Again, over-simplified, algebra is a major part of mathematics studying the manipulation of mathematical symbols with the use of letters, such as to make equations and more.
      • Vectors[18]            
      • Matrix Algebras[19]
    • (Multivariate or multivariable) Calculus,[20] which is used to develop and improve learning-related attributes in Machine Learning (a short gradient-descent sketch follows this list).
      • Pre-Calculus & Calculus[21]: oversimplified, one can state that this is the mathematical study of change and thus also motion.[22] Note, just perhaps it might be advisable to consider first laying some foundations of (linear) algebra, geometry and trigonometry before calculus.
      • Multivariate (Multivariable) Calculus: instead of only dealing with one variable, here one focuses on calculus with many variables. Note: this does not seem to be commonly covered in high school settings, apart from the relatively few exceptional high school students who do study it.[23]
        • Vector[24] Calculus (i.e. gradient, divergence, curl) and vector algebra:[25] of use in understanding the mathematics behind the Backpropagation Algorithm used in present-day artificial neural networks, as part of research in Machine Learning, Deep Learning and supervised learning techniques.
      • Mathematical Series and Convergence, numerical methods for Analysis
    • Set Theory[26] or Type Theory: the latter is similar to the former except that the latter eliminates some paradoxes found in Set Theory.
    • Basics of (Numerical) Optimization[27] (Linear / Quadratic)[28]
    • Other: discrete mathematics (e.g. proof, algorithms, set theory, graph theory), information theory, optimization, numerical and functional analysis, topology, combinatorics, computational geometry, complexity theory, mathematical modeling, …
    • Additional: Stochastic Models and Time Series Analysis; Differential Equations; Fourier Analysis and Wavelets; Random Fields;
    • Even More advanced: PDEs; Stochastic Differential Equations and Solutions; PCA; Dirichlet Processes; Uncertainty Quantification (Polynomial Chaos, Projections on vector space)
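
As announced in the calculus item above, here is a minimal, hedged sketch of gradient descent on a two-variable function, meant only to hint at why (multivariable) calculus appears throughout Machine Learning; the toy function f(x, y) = (x - 3)^2 + (y + 1)^2 and the step size are invented for illustration.

# A toy gradient-descent sketch on f(x, y) = (x - 3)^2 + (y + 1)^2 (invented example).
def grad(x, y):
    # Partial derivatives of f with respect to x and y.
    return 2 * (x - 3), 2 * (y + 1)

x, y, learning_rate = 0.0, 0.0, 0.1
for _ in range(100):
    gx, gy = grad(x, y)
    x -= learning_rate * gx   # step against the gradient
    y -= learning_rate * gy
print(round(x, 3), round(y, 3))  # approaches the minimum at (3, -1)
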
Mini Project #___ : 
Markov Chains 
Can you rework this Python project by Ms. Lindsey Bieda to use a Chinese or another language’s word list?
Project context: https://rarlindseysmash.com/posts/2009-11-21-making-sense-and-nonsense-of-markov-chains 
Code source: https://gist.github.com/3928224 
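
For readers who want a feel for the idea before opening the project above, here is a minimal, hedged word-level Markov chain sketch in Python; it is not Ms. Bieda’s code, and the one-line sample text is a placeholder one would swap for a Chinese or other word list.

# A toy word-level Markov chain (placeholder text; not the mini-project code).
import random
from collections import defaultdict

text = "the cat sat on the mat the cat ran"   # placeholder corpus
words = text.split()

# Build the transition table: word -> list of words observed to follow it.
transitions = defaultdict(list)
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

# Generate a short chain: at each step, pick one of the observed next words at random.
random.seed(0)
word, output = "the", ["the"]
for _ in range(6):
    candidates = transitions.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))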

[1] Hardy, G.H. & Snow, C.P. (1941). A Mathematician’s Apology. London: Cambridge University Press

[2] More on “pattern recognition” in the field of “Artificial Intelligence” and its sub-study of  “Machine Learning” will follow elsewhere in future posts.

[3] Courant, R. et al. (1996). What Is Mathematics? An Elementary Approach to Ideas and Methods. USA: Oxford University Press  

[4] For instance (in alphabetical order):

[5] Meery, B. (2009). Probability and Statistics (Basic). FlexBook.  Online: CK-12 Foundation. Retrieved on March 31, 2020 from  http://cafreetextbooks.ck12.org/math/CK12_Prob_Stat_Basic.pdf

[6] a sub-field in the field of Artificial Intelligence research and development (more details later in a future post). A resource covering mathematics for Machine learning can be found here:

Deisenroth, M. P. et al. (2020). Mathematics for Machine Learning. Online: Cambridge University Press. Retrieved on April 28, 2020 from https://mml-book.github.io/book/mml-book.pdf AND https://github.com/mml-book/mml-book.github.io

Orland, P. (2020). Math for Programmers. Online: Manning Publications. Retrieved on April 28, 2020 from https://www.manning.com/books/math-for-programmers 

[7] Downey, A.B. (?).Think Stats. Exploratory Data Analysis in Python. Version 2.0.38 Online: Needham, MA: Green Tea Press. Retrieved on March 9, 2020 from http://greenteapress.com/thinkstats2/thinkstats2.pdf

[8] A basic High School introduction to Statistics (and on mathematics) can be freely found at Khan Academy. Retrieved on March 31, 2020 from https://www.khanacademy.org/math/probability

[9] Grinstead, C. M.; Snell, J. L. (1997). Introduction to Probability. USA: American Mathematical Society (AMS). Online: Dartmouth College. Retrieved on March 31, 2020 from https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/amsbook.mac.pdf AND solutions to the exercises retrieved from http://mathsdemo.cf.ac.uk/maths/resources/Probability_Answers.pdf

[10] Such as: Distributions, Expectations, Variance, Covariance, Random Variables, …

[11] Doyle, P. G. (2006). Grinstead and Snell’s Introduction to Probability. The CHANCE Project. Online: Dartmouth retrieved on March 31, 2020 from https://math.dartmouth.edu/~prob/prob/prob.pdf

[12] Norris, J. (1997). Markov Chains (Cambridge Series in Statistical and Probabilistic Mathematics). Cambridge: Cambridge University Press. Information retrieved on March 31, 2020 from https://www.cambridge.org/core/books/markov-chains/A3F966B10633A32C8F06F37158031739  AND http://www.statslab.cam.ac.uk/~james/Markov/  AND  http://www.statslab.cam.ac.uk/~rrw1/markov/    http://www.statslab.cam.ac.uk/~rrw1/markov/M.pdf AND https://books.google.com.hk/books/about/Markov_Chains.html?id=qM65VRmOJZAC&redir_esc=y

[13] Markov, A. A. (January 23, 1913). An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains. Lecture at the physical-mathematical faculty, Royal Academy of Sciences, St. Petersburg, Russia. In (2006, 2007). Science in Context 19(4), 591-600. UK: Cambridge University Press. Information retrieved on March 31, 2020 from https://www.cambridge.org/core/journals/science-in-context/article/an-example-of-statistical-investigation-of-the-text-eugene-onegin-concerning-the-connection-of-samples-in-chains/EA1E005FA0BC4522399A4E9DA0304862

[14] Doyle, P. G. (2006). Grinstead and Snell’s Introduction to Probability. Chapter 11, Markov Chains. Dartmouth retrieved on March 31, 2020 from https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf

[15] A fun and fantasy-rich introduction to Markov Chains: Bieda, L. (2009). Making Sense and Nonsense of Markov Chains. Online, retrieved on March 31, 2020 from https://rarlindseysmash.com/posts/2009-11-21-making-sense-and-nonsense-of-markov-chains AND https://gist.github.com/LindseyB/3928224

[16] Such as: Scalars, Vectors, Matrices, Tensors….

See:

Lang, S. (2002). Algebra. Springer AND

Strang, G. (2016). Introduction to Linear Algebra. (Fifth Edition). Cambridge MA, USA: Wellesley-Cambridge & The MIT Press. Information retrieved on April 24, 2020 from https://math.mit.edu/~gs/linearalgebra/ AND https://math.mit.edu/~gs/ AND

Strang, G. (Fall 1999). Linear Algebra. Video Lectures (MIT OpenCourseWare). Online: MIT Center for Advanced Educational Services. Retrieved on March 9, 2020 from https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/ AND

Hefferon, J. Linear Algebra. http://joshua.smcvt.edu/linearalgebra/book.pdf  AND http://joshua.smcvt.edu/linearalgebra/#current_version  (teaching slides, answers to exercises, etc.)

[17] Algebra basics and beyond can be studied via these resources retrieved on March 31, 2020 from https://www.ck12.org/fbbrowse/list?Grade=All%20Grades&Language=All%20Languages&Subject=Algebra

[18] Roche, J. (2003). Introducing Vectors. Online Retrieved on April 9, 2020 from http://www.marco-learningsystems.com/pages/roche/introvectors.htm

[19] Petersen, K.B & Pedersen, M.S. (November 15, 2012). The Matrix Cookbook. Online Retrieved from http://matrixcookbook.com and https://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf

[20] Such as: Derivatives, Integrals, limits, Gradients, Differential Operators, Optimization. …See a leading text book for more details: Goodfellow, I. et al. (2017). Deep Learning. Cambridge, MA: MIT Press + online via www.deeplearningbook.org and its https://www.deeplearningbook.org/contents/linear_algebra.html Retrieved on March 2, 2020.

[21] Spong, M. et al. (2019). CK-12 Precalculus Concepts 2.0. Online: CK-12. Retrieved on March 31, 2020 from https://flexbooks.ck12.org/cbook/ck-12-precalculus-concepts-2.0/ and more at https://www.ck12.org/fbbrowse/list/?Subject=Calculus&Language=All%20Languages&Grade=All%20Grades

[22] Jerison, D. (2006, 2010). 18.01 SC Single Variable Calculus. Fall 2010. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA. Retrieved on March 31, 2020 from https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/#

[23] A couple of anecdotal examples can be browsed here: https://talk.collegeconfidential.com/high-school-life/1607668-how-many-people-actually-take-multivariable-calc-in-high-school-p2.html and https://www.forbes.com/sites/johnewing/2020/02/15/should-i-take-calculus-in-high-school/#7360ae8a7625 .  In this latter article references to formal studies are provided; it is suggested to be cautious about taking Calculus, let alone the multivariable type. An online course on Multivariable Calculus for High school students is offered at John Hopkins’s Center for Talented Youth: Retrieved on March 31, 2020 from https://cty.jhu.edu/online/courses/mathematics/multivariable_calculus.html Alternatively, the MIT Open Courseware option is also available: https://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/Syllabus/

[24] Enjoy mesmerizing play with vectors here: https://anvaka.github.io/fieldplay  

[25] Hubbard, J. H. et al. (2009). Vector Calculus, Linear Algebra, and Differential Forms A Unified Approach. Matrix Editions

[26] The study of collections of distinct objects or elements. The elements can be any kind of object (number or other)

[27] Boyd, S & Vandenberghe, L. (2009). Convex Optimization. Online: Cambridge University Press. Retrieved on March 9, 2020 from https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf

[28] Luke, S. (October 2015). Essentials of Metaheuristics. Online Version 2.2. Online: George Mason University. Retrieved on March 9, 2020 from https://cs.gmu.edu/~sean/book/metaheuristics/Essentials.pdf 

The Field of AI (Part 02-5): A “Pre-History” & a Foundational Context

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link on Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following, one can read about very few attributes picked up on from Control Theory as contextualizing to the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


04 — The Field of AI: A Foundational Context: Cognitive Science

Cognitive Science

Cognitive Science combines various fields of academic research into one.[1] It is therefore called an interdisciplinary field or, when even more coherently integrated, a transdisciplinary field, possibly involving non-academic participants.[2] It touches on the fields of anthropology, psychology, neurology or the neurosciences, biology, health sciences, philosophy, linguistics, computer science, and so on.

The work of Roger Shepard, Terry Winograd[3] and David Marr, among many others, is considered to have been crucial in the development of this academic field.[4] It is also claimed that Noam Chomsky, as well as the founders of the field of AI, had a tremendous influence on the development of Cognitive Science.[5] The links between the field of Cognitive Science and the field of AI are noticeable in a number of research projects (e.g. see a future post on AGI) and publications.[6]

It is the field that scientifically studies biological “mental operations” (human and other) as well as the processes and attributes assigned to, or associated with, “thinking” and the acquisition or processes of “language”, “consciousness”, “perception”, “memory”, “learning”, “understanding”, “knowledge”, “creativity”, “emotions”, “mind”, “intelligence”, “motor control”, “vision”, models of intentional processes, the application of Bayesian methods to mental processes, and other intellectual functions.[7] Any of these and related terms, viewed through scientific lenses (while seemingly obvious in everyday use) are very complex, if not debated or contested.[8] The field researches and develops models of the “mental architecture”, which includes a model both of “information processing and of how the mind is organized.”[9]

Hence the need for fields such as Cognitive Science. Since these areas imply different systems, drawing on various fields (or disciplines) as sources for Cognitive Science is not only inevitable, it is necessary. The context of each individual system (or field, or discipline) is potentially the core research area of a field covering another system. As suggested above, this implies an overlap and integration of other systems (or fields, or disciplines) into one. This, in turn, requires an increased scientific awareness and practice of interdependence between fields of research.

Cognitive Science has produced advances in computational modeling, the creation of cognitive models and the study of computational cognition.[10]

The field of AI, through its history, found inspiration in Cognitive Science for its study of artificial systems. One example is the loose analogy with neurons (i.e. some of the cells making up a brain) and with neural networks (i.e. the connection of such cells) for its mathematical models.

To some extent, an AI researcher could take the models distilled from research in Cognitive Science and use them in their own research on artificial systems. The bridge between the two arguably consists of the models, and specifically the mathematical models.

Figure 1: Cognitive Science is a multi-disciplinary academic field at the nexus of a number of other fields, including those shown above. Image in the Public Domain. Retrieved on March 18, 2020.

Simultaneously, researchers in Cognitive Science can also use solutions found in the field of AI to conduct their research.

Research in Artificial General Intelligence (AGI) partially aims to recreate functions and the implied processes with their output, which Cognitive Science studies in biological neural networks (i.e. brains).

Some have argued that the field of AI is a sub-field of Cognitive Science, though many do not subscribe to this notion.[11] The argument has been made because in the field of AI one finds research on processes that are innate to the brain: sound pattern recognition, speech recognition, object recognition, gesture recognition, and so on, which are in turn studied in other fields, such as Cognitive Science. It is more commonly agreed that AI is a sub-field of Computer Science. Still, as stated in the opening lines of this chapter, many do agree on the strong interdisciplinary or transdisciplinary links between the two.[12]


[1] Bermudez J.L.(2014). Cognitive Science. An Introduction to the Science of the Mind. Cambridge: Cambridge University Press. p. 2 Retrieved on March 23, 2020 from https://www.cambridge.org/us/academic/textbooks/cognitivescience

[2] https://semanticcomputing.wixsite.com/website-4

[3] He conducted some of his work at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology (MIT) research program. See Winograd, T. (1972). Understanding Natural Language. In Cognitive Psychology, Volume 3, Issue 1, January 1972, pp. 1-191. Boston: MIT; Online: Elsevier. Retrieved on March 25, 2020 from https://www.sciencedirect.com/science/article/abs/pii/0010028572900023   

[4] Bermudez J.L.(2014). pp. 3, 16, and on.

[5] Thagard, Paul, (Spring 2019 Edition). Cognitive Science. In Edward N. Zalta (ed.). The Stanford Encyclopedia of Philosophy. Online: Stanford University. Retrieved on March 23, 2020 from https://plato.stanford.edu/archives/spr2019/entries/cognitive-science/

[6] Gurumoorthy, S. et al. (2018). Cognitive Science and Artificial Intelligence: Advances and Applications. Springer

[7] Green, C. D. (2000). Dispelling the “Mystery” of Computational Cognitive Science. History of Psychology, 3(1), 62–66.

[8] Crowther-Heyck, H. (1999). George A. Miller, language, and the computer metaphor and mind. History of Psychology, 2(1), 37–64

[9] Bermudez J.L.(2014). p. xxix

[10] Houdé, O., et al (Ed.). (2004). Dictionary of cognitive science; neuroscience, psychology, artificial intelligence, linguistics, and philosophy. New York and Hove: Psychology Press;  Taylor & Francis Group.

[11] Zimbardo, P., et al. (2008). Psychologie. München: Pearson Education.

[12] An example thereof is the Bachelor of Science program in “Cognitive Science and Artificial Intelligence” at Tilburg University, The Netherlands. Retrieved on March 23, 2020 from  https://www.tilburguniversity.edu/education/bachelors-programs/cognitive-science-and-artificial-intelligence

The Field of AI (Part 02-4): A “Pre-History” & a Foundational Context

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link on Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following, one can read about very few attributes picked up on from Control Theory as contextualizing to the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


03 — The Field of AI: A Foundational Context: Control Theory


Control Theory

When thinking at a daily and personal level, one can observe that one’s body, the human body’s physiology, seemingly has a number of controls in place for it to function properly.

Humans, among many other species, can be observed showing different types of control. One can observe control in the biological acts within the body, for instance in the physiological nature of one’s bodily processes, be they more or less autonomous or automatic. Besides, for instance, the beating of the heart or the workings of the intestines, one could also consider processes within the brain and the degrees of control exercised with and through the human senses.

Humans also exert control by means of, for instance, their perceptions, their interpretations, and a set of rituals and habitual constraints, which in turn might be controlled by social, cultural or in-group norms, rules, laws and other external or internalized constraints.

Broadening one’s view of ‘control’ further: one can find the need for some form and degree of control not only within humans but in any form of life, in any organism. In effect, an organism is an example of a system of cells working together in an organized and cooperative manner, instrumental to their collective survival as unified into the organism. Come to think of it, an organism can be considered sufficiently organized and functional only if some degree and form of shared, synchronized control underlies the cooperation of its cells.

Interestingly enough, to some perhaps, such control is shared within the organism with colonies of supportive bacteria: its microbiome (e.g. the human microbiome).[1] While this may seem very far from the topic of this text, analogies and links between Control Theory, Machine Learning and the biological world lie at the foundation of the academic field of AI.[2]

If one were to abstract the thinking on ‘control’ somewhat, these controlling systems could be seen as supporting learning from sets of (exchanged) information or data. Such systems might engage in these acts of interchanged learning mainly to sustain forms and degrees of stability, through adaptation, depending on needs and contextual changes. At the very least, research on complex dynamic systems can use insights from both Control Theory and, consequently, the processing potential promised by Machine Learning.

Control could imply the constraining of the influence of certain variables or attributes within, or in context of, a certain process. One attribute (e.g. a variable or constant) could control another attribute and vice versa. These interactions of attributes could then be found to be compounded into a more complex system.

Control most commonly allows for the reduction of risks and can allow a given form and function (not) to exist. The existence of a certain form and function of control can allow a system (not) to act within its processes.

Zooming in and focusing, one can consider that perhaps similar observations and reflections have led researchers to construct what is known as “Control Theory.”

Control Theory is the mathematical field that studies control in systems. It does so through the creation of mathematical models and the study of how dynamic systems optimize their behavior by controlling processes within a given, influencing environment.[3]

Through mathematics and engineering, it allows a dynamic system (e.g. an AI system, an autonomous vehicle, a robotic system) to perform in a desired manner. Control is exercised over the behavior of a system’s processes of any size, form, function or complexity. Control, as a sub-process, could be inherent to the system itself, controlling itself and learning from itself.

In a broader sense, Control Theory[4] can be found in a number of academic fields. For instance, it appears in Linguistics with, among others, Noam Chomsky[5] and the control of a grammatical contextual construct over a grammatical function. A deeper study of this aspect, while foundational to the fields of Cognitive Science and AI, is outside the introductory spirit of this section.

As an extension of a human’s control within their own biological workings, humans and other species have created technologies and processes that allow them to exert more (perceived) control over certain aspects of (their perceived) reality and their experiences and interactions within it.

Looking closer, control is also found in biology and psychology, with the study of an organism’s processes and its (perceptions of) positive and negative feedback loops. These control processes allow a life form to maintain (or control its perception of) a balance, where it is not too cold or too hot, not too hungry and so on, or to act on a changing situation (e.g. to start running because fear is increasing).

As one might notice, “negative” is not something “bad” here. Here the word means that something is being reduced so that a system’s process (e.g. the heat of a body) and its balance can be maintained and stabilized (e.g. not too cold and not too hot). Likewise, “positive” here does not (always) mean something “good”; it means that something is being increased. Systems using these kinds of processes are called homeostatic systems.[6] Such systems, among others, have been studied in the field of Cybernetics,[7] the science of control.[8] This field, in simple terms, studies how a system regulates itself through control and the communication of information[9] toward such control.

These processes (i.e. negative and positive feedback loops) can be activated if a system predicts (or imagines) that something will happen. Note: here lies a loose link with probability, and thus with data processing, and hence with some processes also found in AI solutions.

In a traditional sense, a loop in engineering and its Control Theory can, for instance, be understood as open-loop or closed-loop control. Closed-loop control features a feedback function: feedback is provided by data sent from a sensor back into the system, controlling the system’s functioning (e.g. some attribute within the system is stopped, started, increased or decreased).
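
A minimal, hedged sketch of such a closed loop, in Python, is given below: a toy “room” is nudged toward a temperature setpoint by a proportional controller, with the sensor reading fed back into the control action at each step; the plant model, the gain and all the numbers are invented for illustration.

# A toy closed-loop (feedback) controller: all dynamics and numbers are invented.
def simulate(setpoint=21.0, temperature=15.0, gain=0.4, steps=20):
    history = []
    for _ in range(steps):
        error = setpoint - temperature            # sensor feedback: measured deviation
        heater_power = gain * error               # proportional control action
        heat_loss = 0.05 * (temperature - 15.0)   # toy plant: the room slowly cools
        temperature += heater_power - heat_loss
        history.append(round(temperature, 2))
    return history

print(simulate())
# The temperature climbs toward the setpoint (with the small steady-state
# offset typical of a purely proportional controller).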

A feedback loop is one control technique. Artificial Intelligence applications, such as Machine Learning and its Artificial Neural Networks, can be applied to exert degrees of control over a changing and adapting system using these, similar, or more complex loops. These AI methods, too, use techniques rooted in Control Theory, which can be traced to the 1950s and the Perceptron system (a kind of Artificial Neural Network) built by Rosenblatt.[10] A number of researchers in Artificial Neural Networks, and in Machine Learning in general, found their creative stepping stones in Control Theory.

The field of AI has links with Cognitive Science and makes some references to brain forms and functions (e.g. the loose links with neurons). Feedback loops as found in biological systems, and loops in general, have consequently been referenced and applied in fields of engineering as well. Here, in association with the field of AI, Control Theory and these loops mainly refer to the engineering and mathematics used in the field of AI. Since some researchers are exploring Artificial General Intelligence (AGI), it might also increasingly interest one to maintain some awareness of these and other links between Biology and Artificial Intelligence, as a basis for sparking one’s research and creative thinking in context.


[1]  Huang, S. et al. (February 11, 2020). Human Skin, Oral, and Gut Microbiomes Predict Chronological Age. Retrieved on April 13, 2020 from https://msystems.asm.org/content/msys/5/1/e00630-19.full-text.pdf

[2] See for instance, Dr. Liu, Yang-Yu (刘洋彧). “…his current research efforts focus on the study of human microbiome from the community ecology, dynamic systems and control theory perspectives. His recent work on the universality of human microbial dynamics has been published in Nature…” Retrieved on April 13, 2020 from Harvard University, Harvard Medical School, The Boston Biology and Biotechnology (BBB) Association, The Boston Chapter of the Society of Chinese Bioscientists in America (SCBA; 美洲华人生物科学学会: 波士顿分会) at https://projects.iq.harvard.edu/bbb-scba/people/yang-yu-liu-%E5%88%98%E6%B4%8B%E5%BD%A7-phd and examples of papers at https://scholar.harvard.edu/yyl

[3] Kalman, R. E. (2005). Control Theory (mathematics). Online: Encyclopædia Britannica. Retrieved on March 30, 2020 from https://www.britannica.com/science/control-theory-mathematics

[4] Manzini M. R. (1983). On Control and Control Theory. In Linguistic Inquiry, 14(3), 421-446. Information Retrieved April 1, 2020, from www.jstor.org/stable/4178338

[5] Chomsky, N. (1981, 1993). Lectures on Government and Binding. Holland: Foris Publications. Reprint. 7th Edition. Berlin and New York: Mouton de Gruyter,

[6] Tsakiris, M. et al. (2018). The Interoceptive Mind: From Homeostasis to Awareness. USA: Oxford University Press

[7] Wiener, N. (1961). Cybernetics: or the Control and Communication in the Animal and the Machine: Or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press

[8] The Editors of Encyclopaedia Britannica. (2014). Cybernetics. Retrieved on March 30, 2020 from https://www.britannica.com/science/cybernetics

[9] Kline, R. R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age. New Studies in American Intellectual and Cultural History Series. USA: Johns Hopkins University Press.

[10] Goodfellow, I., et al. (2017). Deep Learning. Cambridge, MA: MIT Press. p. 13



Image Caption:

A typical, single-input, single-output feedback loop with descriptions for its various parts.

Image source:

Retrieved on March 30, 2020. License & attribution: Orzetto / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)