In exploring this statement, I wish to take the opportunity to focus on, extrapolate and perhaps contextualize the word “worry” a bit here.
I sense “worry” touches on an important human process of urgency.
What if we were to consider who might be “worried”, and when “worry” is confused with, or used as, a distracting label? Could this give any interesting insight into our human mental models and processes (not of those who do the worrying but rather of those using the label)?
The term might unwittingly end up working as a tool for confusion or distraction (or hype). I notice that “worry,” “opposition,” “reflection,” “anxiety,” “critical thought-exercises” and “marketing rhetoric toward product promotion” are too easily confused. [Some examples of such convoluted confusions might be (indirectly) hinted at in this article: here —OR— here ]
To me, at least, the terms listed above are not experienced as equatable, just yet.
As a species, within which a set of humans claims to be attracted to innovation, we might want to innovate not only on externals, or symptoms, but also on causes: the attributes inherent to human interpretational processes and the ability to apply nuance with them. For example, is something “worrying,” or is it not (only) “worrying” but perhaps something else or additional that takes higher urgency and/or importance?
I imagine that in learning these distinctions, we might actually “innovate”.
Engaging in a thought-exercise is an exercise in considering altered, alternative or nuanced potential human pathways toward future action and outcomes, as if exploring locational potentials: “there-1” rather than “there-2” or “there-n;” and that rather than an invitation for another to utter: “don’t worry.”
If so, critical thought might need to be neither a subscription to “worry” nor the “dismissal” of one scenario, one technology, one process, one ideology, etc., over another. [*1]
Then again, from a user’s point of view, I dare venture that the use of the word “worry” (as in “I worry that…”) might not necessarily be a measurable representation of any “actual” state of one’s psychology. That is, an observable behavior or an interpreted (existence of an) emotion has been said to be no guaranteed representation of the mental models or processes of those who are observed (as worrying). [a hint is offered here —OR— here ]
Hence, “worry” could be, and at times seemingly is, used as a rhetorical tool from the toolboxes of ethos, pathos or logos, and not as an externalization of one’s actual emotional state in that ephemeral moment.
footnote —-• [*1]
Herein, in these distinctions, just perhaps, might lie a practical exercise of “democracy”.
If critical thought, rhetoric, anxiety and opposition are piled up and ambiguously mixed together, then one might be inclined to self-censor out of the sheer, overwhelming confusion of not being sure whether one will be perceived as dealing with one rather than, or instead of, the other.
If I had not constructed “we” then “we” would not exist; more so the reverse, one might so insist
if there were no mi followed by silence in-between it & the do, then sol-itude of the reverberating drone would drown away any harmony
if the “amoeba” did not transgress protoplastically, into evolutions, then no organization of organs as enablements through a body, would come to do what I do: an orchestration of neurons, beyond the brain & in baroque-like counterpoint with microbiomes, across a cooperative body I call I
if I were but I then I would not need semiotics, since expressing meaning, as aesthetic or ethic, in-between I & my reflection, would be as Narcissus out of social context: meaninglessly pining away
If I were merely I, I would need not construct money to trade, be mesmerized by titles to bear, not impose soldiers to hold, not pain nor laughter to share. Things would be bare of value; mere things out there
If I were but I in an I-land adrift through expanding space, then I would be an aberration as much as what I might wonder about intelligent life among the stars
It is in-between the processes of the physical irrefutability that I is not 1 & that we co-construct meaning, to do for me, so as to be & then let go of the solitudes of I
URLs for A “Pre-History” & a Foundational Context:
This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
The post here is a first and very short link with Literature, Mythology & the Arts as one of the foundational contexts of the Field of AI
When thinking at a daily and personal level, one can observe that one’s body, the human physiology, seemingly has a number of controls in place for it to function properly.
Humans, among many other species, can be observed showing different types of control. One can observe control in the biological acts within the body, for instance in the physiological nature of the body’s processes, be they more or less autonomous or automatic. Besides, for instance, the beating of the heart or the workings of the intestines, one could also consider processes within, e.g., the brain, and the degrees of control exercised with and through the human senses.
Humans also exert control by means of, for instance, their perceptions, their interpretations, and a set of rituals and habitual constraints, which in turn might be controlled by a set of social, cultural or in-group norms, rules, laws and other external or internalized constraints.
Really broadening one’s view onto ‘control’: one can find the need for some form and degree of control not only within humans but in any form of life, in any organism. In effect, an organism is an example of a system of cells working together, in an organized and cooperative manner, instrumental to their collective survival as a unified organism. Come to think of it, an organism can be considered sufficiently organized and working if some degree and some form of shared, synchronized control underlies the cooperation of its cells.
Interestingly enough, to some perhaps, such control is shared, within the organism, with colonies of supportive bacteria: its microbiome (e.g. the human microbiome). [1]
While this may seem very far from the topic of this text, analogies and links between Control Theory, Machine Learning and the biological world are at the foundation of the academic field of AI.[2]
If one were to somewhat abstract the thinking on the topic of ‘control’, then these controlling systems could be seen as a support towards learning from sets of (exchanged) information or data. These systems might engage in such acts of interchanged learning with, possibly, the main aim of sustaining forms and degrees of stability, through adaptations, depending on needs and contextual changes. At the very least, research surrounding complex dynamic systems can use insights from both Control Theory and, consequentially, the processing potentials promised within Machine Learning.
Control could imply constraining the influence of certain variables or attributes within, or in the context of, a certain process. One attribute (e.g. a variable or a constant) could control another attribute and vice versa. These interactions of attributes could then be found to be compounded into a more complex system.
Control most commonly allows for the reduction of risks and can allow a given form and function (not) to exist. The existence of a certain form and function of control can allow a system (not) to act within its processes.
When one zooms in and focuses, one can consider that perhaps similar observations and reflections have brought researchers to construct what is known as “Control Theory.”
Control Theory is the mathematical field that studies control in systems. It does so through the creation of mathematical models of how dynamic systems optimize their behavior by controlling processes within a given, influencing environment.[3]
Through mathematics and engineering, it allows a dynamic system to perform in a desired manner (e.g. an AI system, an autonomous vehicle, a robotic system). Control can be exercised over the behavior of a system’s processes of any size, form, function or complexity. Control, as a sub-process, could also be inherent to a system itself, controlling itself and learning from itself.
In a broader sense, Control Theory[4] can be found in a number of academic fields. For instance, it is found in the field of Linguistics, with, for instance, Noam Chomsky[5] and the control of a grammatical contextual construct over a grammatical function. A deeper study of this aspect, while foundational to the fields of Cognitive Science and AI, is outside the introductory spirit of this section.
As an extension of a human and the control within their own biological workings, humans and other species have created technologies and processes that allow them to exert more (perceived) control over certain aspects of (their perceived) reality and of their experiences and interactions within it.
Looking closer, this is found in the areas of biology and psychology, with the study of an organism’s processes and its (perceptions of) positive and negative feedback loops. These control processes allow a life form (control of its perception of) maintaining a balance, where it is not too cold or too hot, not too hungry and so on; or to act on a changing situation (e.g. to start running because fear is increasing).
As one might notice, “negative”
is not something “bad” here. Here the word means that something is being
reduced so that a system’s process (e.g. heat of a body) and its balance can be
maintained and stabilized (e.g. not too cold and not too hot). Likewise, “positive” here does not (always) mean something “good”. It means that something is being increased. Systems using these kinds of processes are called homeostatic systems.[6]
Such systems, among others, have been studied in the field of Cybernetics;[7]
the science of control.[8]
This field, in simple terms, studies how a system regulates itself through its
control and the communication of information[9]
towards such control.
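To make the idea of a negative feedback loop slightly more tangible, here is a minimal, purely illustrative Python sketch (not taken from any of the cited works): a toy homeostatic regulator that tries to keep a simulated body temperature within a comfortable band. All names and values are hypothetical.

```python
# A minimal, illustrative sketch (not from any cited source) of a negative
# feedback loop: a homeostatic regulator that keeps a simulated "body
# temperature" inside a comfortable band. Values are hypothetical.

def regulate_temperature(temperature, low=36.0, high=37.5):
    """Return a corrective action based on the sensed temperature (negative feedback)."""
    if temperature > high:
        return "cool"   # reduce heat: dampen the deviation upwards
    if temperature < low:
        return "warm"   # increase heat: dampen the deviation downwards
    return "rest"       # within the band: no correction needed

# A few simulated, noisy temperature readings.
readings = [36.2, 37.9, 35.4, 36.8, 38.1]
for sensed in readings:
    print(sensed, "->", regulate_temperature(sensed))
```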
These processes (i.e. negative and positive feedback loops) can be activated if a system predicts (or imagines) something will happen. Note: here lies a loose link with probability, thus with data processing, and hence with some processes also found in AI solutions.
In a traditional sense, a loop in engineering and its Control Theory could, for instance, be understood as open-loop or closed-loop control. A closed-loop control includes a feedback function. This feedback is provided by means of data sent from a sensor back into the system, controlling the functioning of that system (e.g. some attribute within the system is stopped, started, increased, decreased, etc.).
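As a hedged illustration of the difference between the two, the following toy Python sketch compares an open-loop controller (a fixed heating effort, chosen in advance) with a closed-loop controller (an effort proportional to the error reported back by a “sensor”). The room model and the gains are invented for illustration only.

```python
# Hypothetical sketch contrasting open-loop and closed-loop control of a
# simple heated room. The toy physics and gains are invented for illustration.

def simulate(controller, steps=20, target=21.0):
    temp = 15.0                                  # starting room temperature
    for _ in range(steps):
        heat = controller(temp, target)          # controller decides the heating effort
        temp += 0.1 * heat - 0.05 * (temp - 10.0)  # toy dynamics: heating minus losses
    return temp

def open_loop(temp, target):
    # Open loop: a fixed effort chosen in advance, with no sensor feedback.
    return 5.0

def closed_loop(temp, target):
    # Closed loop: the sensed temperature is fed back; effort follows the error.
    error = target - temp
    return 2.0 * error

print("open-loop final temperature:  ", round(simulate(open_loop), 2))
print("closed-loop final temperature:", round(simulate(closed_loop), 2))
```

In the closed-loop case the sensed temperature is fed back into the decision at every step, which is exactly the feedback function described above.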
A feedback loop is one control technique. Artificial Intelligence applications, such as Machine Learning and its Artificial Neural Networks, can be applied to exert degrees of control over a changing and adapting system with these, similar, or more complex loops. These AI methods, too, use applications that found their roots in Control Theory. These can be traced back to the 1950s and the Perceptron system (a kind of Artificial Neural Network) built by Rosenblatt.[10] A number of researchers in Artificial Neural Networks, and in Machine Learning in general, found their creative steppingstones in Control Theory.
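As a rough flavour of that lineage, the following short Python sketch implements a perceptron-style learning rule on a toy problem (learning the logical AND function). It is a minimal, modern toy and not Rosenblatt’s historical system; the error term acts as the feedback signal that adjusts the weights.

```python
# A minimal perceptron sketch in plain Python, loosely in the spirit of the
# 1950s Perceptron (this toy version is not the historical system).
# It learns the logical AND function from four examples.

def predict(weights, bias, inputs):
    activation = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > 0 else 0

def train(samples, epochs=10, learning_rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)   # feedback signal
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_samples)
for inputs, target in and_samples:
    print(inputs, "->", predict(weights, bias, inputs), "(expected", target, ")")
```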
The field of AI has links with Cognitive Science and with some references to brain forms and brain functions (e.g. see the loose links with neurons). Feedback loops, as they are found in biological systems, or loops in general, have consequentially been referenced and applied in fields of engineering as well. Here, associated with the field of AI, Control Theory and these loops mainly refer to the associated engineering and mathematics used in the field of AI. In association with the latter, and since some researchers are exploring Artificial General Intelligence (AGI), it might also increasingly interest one to maintain some degree of awareness of these and other links between Biology and Artificial Intelligence, as a basis for sparking one’s research and creative thinking, in context.
[2] See for instance, Dr. Liu, Yang-Yu (刘洋彧). “…his
current research efforts focus on the study of human microbiome from the
community ecology, dynamic systems and control theory perspectives. His recent
work on the universality of human microbial dynamics has been published in
Nature…” Retrieved on April 13, 2020 from Harvard University, Harvard
Medical School, The Boston Biology and Biotechnology (BBB) Association, The
Boston Chapter of the Society of Chinese Bioscientists in America (SCBA; 美洲华人生物科学学会: 波士顿分会)
at https://projects.iq.harvard.edu/bbb-scba/people/yang-yu-liu-%E5%88%98%E6%B4%8B%E5%BD%A7-phd
and examples of papers at https://scholar.harvard.edu/yyl
[4] Manzini, M. R. (1983). On Control and Control Theory. Linguistic Inquiry, 14(3), 421-446. Retrieved April 1, 2020, from www.jstor.org/stable/4178338
[5] Chomsky, N. (1981, 1993). Lectures on Government and Binding. Holland: Foris Publications. Reprint, 7th Edition. Berlin and New York: Mouton de Gruyter.
[6] Tsakiris, M. et al. (2018). The Interoceptive Mind: From Homeostasis to Awareness. USA: Oxford
University Press
[7] Wiener, N. (1961). Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press
[9] Kline, R. R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age. New
Studies in American Intellectual and Cultural History Series. USA: Johns
Hopkins University Press.
[10] Goodfellow, I., et al. (2017). Deep Learning. Cambridge, MA: MIT Press. p. 13
Image Caption:
“A typical, single-input, single-output feedback loop with descriptions for its various parts.”
Image source:
Retrieved on March 30, 2020 from here. License & attribution: Orzetto / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)
In the early days of philosophy (while often associated with the Ancient Greeks, surely found in other comparable forms in many intellectual, knowledge-seeking communities throughout history) and up till the present day, people have created forms of logic; they study and think about the (existence, development, meaning, processes, applications, … of) mind, consciousness, cognition, language, reasoning, rationality, learning, knowledge, and so on.
Logic, too, has often been claimed to be an Ancient Greek invention, specifically by Aristotle (384 B.C. to 322 B.C.). It has, however, more or less independent traditions across the globe and across time. Logic lies at the basis of, for instance, Computational Thinking, of coding, of mathematics, of language, and of Artificial Intelligence. In its most basic sense (and etymologically), logic comes from the Ancient Greek “Logos” (λόγος), which simply means “speech”, “reasoning”, “word” or “study”. Logic can, traditionally, be understood as “a method of human thought that involves thinking in a linear, step-by-step manner about how a problem can be solved. Logic is the basis of many principles including the scientific method.”[1] Note that research and development (R&D) in fields that could be associated with the field of AI, and within the field of AI itself, shows that today logic, in its various forms, is not only a linear process. Moreover, at present, the study of logic is no longer limited to the field of philosophy alone and is studied in various fields including computer science, linguistics and cognitive science as well.
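As a small, concrete taste of logic as linear, step-by-step thinking, the sketch below (a hypothetical example in Python, not from the quoted source) builds the truth table of the classical inference rule modus ponens and shows that it holds in every case.

```python
# A small, hypothetical illustration of logic as step-by-step reasoning:
# the truth table of modus ponens, ((p -> q) and p) -> q,
# which evaluates to True in every row (a tautology).

from itertools import product

def implies(a, b):
    return (not a) or b

print("p      q      ((p -> q) and p) -> q")
for p, q in product([True, False], repeat=2):
    result = implies(implies(p, q) and p, q)
    print(f"{p!s:6} {q!s:6} {result}")
```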
One author covering a topic of AI tried to make the link between Philosophy and Artificial Intelligence starkly clear. As a discipline, AI is offered for consideration as possibly being “philosophical engineering.”[2] In this linkage, the field of AI is positioned as one researching more philosophical concepts, from any field of science and from Philosophy itself, which are then transcoded into forms ranging from mathematical algorithms to artificial neural networks. This linkage proposes that philosophy covers ideas that are experienced as, for instance, ambiguous, or complex, or open to deep debate. Historically, philosophy has tried to define, or at least explore, many concepts, including ‘knowledge,’ ‘meaning,’ and ‘reasoning,’ which are broadly considered to be processes or states of a larger set known as “intelligence”. The latter itself, too, has been a fertile topic for philosophy. The field of AI as well has been trying to explore, or even solve, some of these attributes. The moment it solved some expression of these, it was often perceived as taking away not only the mystery but also the intelligence of the expressed form. The first checkers or chess “AI” application is hardly considered “intelligent” these days. The first AI solution to beat a champion in such culturally established board games has later been shown to lack sufficient “intelligence” to beat a newer version of an AI application. Maybe that improved version might (or will) be beaten again, perhaps letting the AI applications race on and on? Just perhaps, contrary to “philosophical engineering,” could the field of AI be practically engineering the philosophy out of some concepts?
Find out what “algorithm” means in general (in a non-mathematical, non-coding sense). Can you find similarities with the meaning of “logic”? If so, which attributes seem similar?
What do you think ‘algorithm’ could mean and could be in daily life (outside of the realm of Computer Science)? Are there algorithms we use that are not found in a computer?
Collect references of what consciousness, intelligence, rationality, reasoning and mind have meant in the history of the communities and cultures around you.
Share your findings in a collection of references from the entire class.
Maybe add your findings to the collage (see the Literature project above).
Alternative: the teacher shares a few resources or references of philosophers that covered these topics and that are examples of the pre-history of AI.
Psychology has influenced, and is influenced by, research in AI. To some degree, and still developing further, this is the case today.[3] Not only the field’s relation to cognitive science and the study of processes involving perception and motor control (i.e. control of muscles and movement), but also the experiments and findings from within the longer history of psychology, have been of influence in the areas of AI.
It is important to note that while there are links between the field of AI and psychology, some attributes in this area of study have been contested, opposed and surpassed by cognitive science and by computer science, with its subfield of AI.
An example of a method that can be said to have found its roots in psychology is called “Transfer Learning”. This refers to a process or method learned within one area that is used to solve an issue in an entirely different set of conditions. For a machine, the area and conditions are the data sets and how its artificial neural network model is balanced (i.e. “weighted”). The machine uses a method acquired by working within one data set to work within another data set. In this way the second data set does not have to be particularly large for the machine to return workable outputs.
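The following Python sketch is a deliberately over-simplified, hypothetical illustration of that idea: weights learned on a “large” toy data set (task A) are reused as the starting point for a much smaller, related data set (task B), instead of starting from zero. The data, the tiny linear model and all parameters are invented for illustration.

```python
# A toy, hypothetical sketch of the idea behind Transfer Learning: weights
# learned on one data set initialize the model for a second, smaller data set.

def train(samples, weights, epochs=50, lr=0.05):
    """Fit a tiny linear model y ~ w0 + w1*x with plain per-sample gradient steps."""
    w0, w1 = weights
    for _ in range(epochs):
        for x, y in samples:
            error = (w0 + w1 * x) - y
            w0 -= lr * error
            w1 -= lr * error * x
    return (w0, w1)

# "Task A": plenty of examples of the relation y = 2x + 1.
task_a = [(x, 2 * x + 1) for x in range(5)]
weights_a = train(task_a, weights=(0.0, 0.0))

# "Task B": only two examples of a closely related relation, y = 2x + 2.
task_b = [(1.0, 4.0), (2.0, 6.0)]

from_scratch = train(task_b, weights=(0.0, 0.0), epochs=5)
transferred = train(task_b, weights=weights_a, epochs=5)

print("weights learned on task A:", tuple(round(w, 2) for w in weights_a))
print("task B, from scratch:     ", tuple(round(w, 2) for w in from_scratch))
print("task B, with transfer:    ", tuple(round(w, 2) for w in transferred))
```

With the same small budget of training steps, the transferred starting point ends up much closer to a workable fit for task B than the from-scratch one, which is the practical promise of Transfer Learning.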
The AI method known as Reinforcement Learning is one that could be said to have some similarities with experiments such as those historically conducted by Pavlov and B.F. Skinner. With Pavlov, the process of “Classical Conditioning” was introduced. This milestone in the field of psychology is most famously remembered through Pavlov’s dog, which started to produce saliva the moment it heard the sound of a bell (i.e. the action Pavlov desired to observe). This sound was initially associated with the offering of food: first the bell was introduced, then the food, and then the dog would produce saliva. Pavlov showed that the dog indeed linked the bell and the food. Eventually, the dog would produce saliva on hearing the bell without getting any food. What is important here is that the dog has no control over the production of saliva. That means the response was involuntary; it was automatic. This is, in an over-simplified explanation, Classical Conditioning.
That stated, Reinforcement Learning (RL) is a Machine Learning method where the machine is confronted with degrees of “reward”, or the lack thereof. See the section on RL for further details. Studies surrounding reward can be found in historical research conducted by a researcher named Skinner, and others. It is interesting to add that this research was contested by Chomsky, who questioned its scientific validity and its transferability to human subjects.[4] Chomsky’s critique has been considered important in the growth of the fields of cognitive science and AI, back then in the 1950s. In these experiments, a process called “Operant Conditioning” was being tested. The researchers were exploring voluntary responses (as opposed to the involuntary ones seen with Pavlov). That is to say, these were responses that were believed to be under the control of the test subject and that would lead to some form of learning, following some form of reward.
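To give this a concrete, if toy-sized, shape: the Python sketch below (a hypothetical example, not drawn from the cited studies) lets a small agent try two “levers” and, guided only by the rewards it happens to receive, gradually prefer the lever that pays off more often. This reward-driven trial and error is loosely the spirit shared by Operant Conditioning and Reinforcement Learning.

```python
# A toy, hypothetical sketch of reward-driven learning in the spirit of
# Reinforcement Learning: an epsilon-greedy agent learns which of two
# "levers" pays off more often. Probabilities and parameters are invented.

import random

reward_probability = {"lever_A": 0.2, "lever_B": 0.8}   # unknown to the agent
value_estimate = {"lever_A": 0.0, "lever_B": 0.0}       # the agent's learned estimates
pulls = {"lever_A": 0, "lever_B": 0}

random.seed(0)
for step in range(1000):
    # Epsilon-greedy choice: mostly exploit the best estimate, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(value_estimate))
    else:
        action = max(value_estimate, key=value_estimate.get)

    reward = 1.0 if random.random() < reward_probability[action] else 0.0

    # Update the running-average estimate for the chosen lever (the "learning").
    pulls[action] += 1
    value_estimate[action] += (reward - value_estimate[action]) / pulls[action]

print("estimated values:", {k: round(v, 2) for k, v in value_estimate.items()})
print("times chosen:    ", pulls)
```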
Again, these descriptions are too simplistic. They are
here to nudge you towards further and deeper exploration, if this angle were to
excite you positively towards your learning about areas in the academic field
of AI.
With Linguistics comes the study of semiotics. Semiotics could be superficially defined as the study of symbols and of the various systems for meaning-giving, including and beyond the natural languages. One can think of visual languages, such as icons or architecture; another form is music; and so on. Arguably, each sense can have its own meaning-giving system. Some argue that Linguistics is a subfield of semiotics, while others again turn that around. Linguistics also comes with semantics, grammatical structures (see Professor Noam Chomsky and the Chomsky Hierarchy)[5], meaning-giving, knowledge representation and so on.
Linguistics and Computer Science both study the formal properties of language (formal, programming or natural languages). Therefore, any field within Computer Science, such as Artificial Intelligence, shares many concepts, terminologies and methods with the fields within Linguistics (e.g. grammar, syntax, semantics, and so on). The link between the two is studied via a theory known as “automata theory”[6]: the study of the mathematical properties of abstract machines, or automata. A Turing Machine is a famous example of such an abstract machine model, or automaton. It is a machine that can take a given input and execute rules, expressed in a given language, in a step-by-step manner (called an algorithm), ending up offering an output. Other “languages” that connect these fields are, for instance, Mathematics and Logic.
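As a small illustration from automata theory, here is a hypothetical Python sketch of a deterministic finite automaton (a much simpler abstract machine than a Turing Machine) that reads a binary string symbol by symbol, applying one transition rule per step, and accepts the string when it contains an even number of 1s. The states, alphabet and transition table are invented for the example.

```python
# A small, hypothetical example from automata theory: a deterministic finite
# automaton that accepts binary strings containing an even number of 1s.

transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

def accepts(string, start="even", accepting={"even"}):
    state = start
    for symbol in string:                     # one rule application per input symbol
        state = transitions[(state, symbol)]
    return state in accepting

for word in ["", "1", "101", "1100"]:
    print(repr(word), "->", "accepted" if accepts(word) else "rejected")
```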
Did you
know that the word “automaton” is from Ancient Greek and means something like “self-making”, “self-moving”, or “self-willed”?
That sounds like some attributes of an idealized Artificial Intelligence
application, no?
[2] Skansi, S. (2018). Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence. In Mackie, I. et al. (Eds.), Undergraduate Topics in Computer Science Series (UTiCS). Switzerland: Springer. p. v. Retrieved on March 26, 2020 from http://www.springer.com/series/7592 AND https://github.com/skansi/dl_book
[3] Crowder, J. A. et al. (2020). Artificial Psychology: Psychological Modeling and Testing of AI Systems.
Springer
[4] Among other texts, Chomsky, N. (1959). Reviews: Verbal behavior by B. F. Skinner.
Language. 35 (1): 26–58. A 1967 version retrieved on March 26, 2020 from https://chomsky.info/1967____/
[5] Chomsky, N. (1956). Three models for the description of language. IEEE Transactions on
Information Theory, 2(3), 113–124. doi:10.1109/tit.1956.1056813 AND Fitch, W.
T., & Friederici, A. D. (2012). Artificial
grammar learning meets formal language theory: an overview. Philosophical
Transactions of the Royal Society B: Biological Sciences, 367(1598), 1933–1955.
doi:10.1098/rstb.2012.0103
[6] Automata theory is the study of abstract machines (e.g. “automata”, “automatons”; notice the link with the word “automation”). This study also considers how automata can be used in solving computational problems.