Category Archives: The field of AI

<< Ubuntu & "(A)I" >>


There seem to be about 881,000 “registered” scholarly “robots.” It seems not that obvious for them to be intelligently understood, and accepted as robots, by the one that rules them all.

…perhaps lack of (deep & wide & fluid & relational) understanding could lead to undesirable impositions?

—-•
“Ubuntu & (A)I” | “I am a robot”. Digitally edited digital screenshot —animasuri’22

—-•

#ai #ailiteracy #aiethics #totalitarianlogic #wink #ubuntu

<< Digital Transformation via Human Transgression >>


Basil Bernstein was succinctly paraphrased by Atkinson when the latter wrote that “ritualized language use is highly predictable. In the most extreme case, the language may be entirely predictable. Or at least, such predictability is culturally required: deviations from the prescribed forms will be negatively sanctioned and the social occasion regarded as spoiled. There is no room here, socially speaking, for significant innovation. The innovator in such a context is deviant—perhaps heretical” (2002, 62).

A heretic, a disruptor, a rebel, a whistle-blower, an “enfant terrible,” a critic, a trickster, an anarchist, a maverick or a “dwarsligger” is someone who offers deviations during our collective unwillingness to relationally learn. The latter, “dwarsligger,” is crudely translatable from Dutch as a hinderer, or an obstructionist. Yet, possibly it is better trans-coded as that strong crossbeam supporting the rails carrying us collectively. Or, it is a book printed parallel to its spine.

By observers these roles are too often assumed to have a plethora of “fun” in kicking the quiet, & internally perceived as well-functioning, hornets’ nest. Sure, to the hornet, the hornet is peaceful & abiding. To the hornet these external characters had best remain a mere aesthetic yet quiet presence: “sois belle et tais-toi” (be beautiful and keep quiet).

The perceived proverbial kick these beauties can administer is not necessarily provided in “fun,” nor is it intended to destroy universality of peace, nor create chaos. Many of these actors are non-violent & find civility in high-dimensional order.

Hear this, folks: self-reflection & reflection can lead to uncomfortable observations that require a movement out of a status quo, or in other words, out of a comfort zone into a liminal space of je ne sais quoi. It can happen on one’s sofa, yet it will jolt the spine.

Of course, by the hornets these uncomfortable characters are too easily equated with chaos or violence; wrongfully so. In effect, the equation is a violent act of denial & character assassination; perhaps heretically so (Ibid). It is especially odd to see these words (chaos, anarchist & violence) equated in European or North-American settings while these same societies call for innovation & human transformation.

After all, how would this collection of diverse agents fit within the networked social fabric & its relational learning processes? How is relational learning stacked if not transformational & somewhat unsettling? That’s for humans: you, me, us.

Now, how do some of the digital social network algorithms compare? Could it be, just as with some of their makers, that algorithms too equate human proverbial “crossbeams” not with a solid ride but rather with undesired disruption? Please your reader (i.e., use their language) or be technologically regarded as spoiling the social event.

Any transformation had best come as conscious, nuanced, co-interrupting, contextualizing humane acts forward.

Reference:

Basil Bernstein via Atkinson, P. (2002). “Language, Structure, Reproduction: An Introduction to the Sociology of Basil Bernstein.” New York: Methuen & Co via Taylor & Francis e-Library. (p. 62).

Continuing on that same page, the author and the referenced author offer interesting insights on “tradition,” which I believe can be found among proverbial hornets and their upsetting characters alike. Yes, I intuit that the innovator too will expose those who deviate from the “innovation” as heretics. Ah, our species has so many human relational areas to transform.

An extra, rather tautological, quote from page 62:

“There is no such thing as a perfectly frozen, unchanging ‘tradition’ which is perfectly transmitted from generation to generation in unmodified forms” (Ibid).

<< Boutique Ethic >>

Thinking of what I label as a “boutique ethic,” such as AI ethics, must indeed come with thinking about ethics (cf. here). I think this is not only an assignment for the experts. It is also one for me: the layperson-learner.

Or is it?

Indeed, if seen through more than a techno-centric lens alone, some voices do claim that one should not be bothered with ethics if one does not understand the technology that confines ethics into a boutique ethic, e.g., “AI” (see the 2022 UNESCO report on AI curricula in K-12). I am learning to disagree.

I am not a bystander, passively looking on, and onto my belly button alone. I open acceptance to Noddings’ thought on care (1995, 187): “a carer returns to the cared-for” when, in the most difficult situations, principles fail us (Rossman & Rallis 2010). How are we caring for those affected by the throwing around of the label “AI” (as a hype or as a scarecrow)?

Simultaneously, how are we caring for those affected by the siphoning off of their data, for application, unknown to the affected, of data derived from them and processed in opaque and ambiguous ways? (One could, as one of many anecdotes, summon up the polemics surrounding DuckDuckGo and Microsoft, or Target and baby-product coupons, and so on.)

And yet, let us expand back to ethics surrounding the boutiqueness of it: the moment I label myself (or another, such as the humans behind DuckDuckGo) as “stupid,” “monster,” “trash,” “inferior,” “weird,” “abnormal,” “you go to hell,” or other more colorful itemizations, is the moment my (self-)care evaporates and my ethical compass moves away from the “...unconditional worth of all human beings and the equal respect to which they are entitled” (Rossman & Rallis 2010). Can then a mantra come to the aid: “carer, return to the cared-for”? I want to say: “yes.”

Though, what is the impact of the mantra if the other does not apply this mantra (i.e., DuckDuckGo and Microsoft)? And yet, I do not want to get into a yoyo “spiel” of:
Speaker 1:“you first”,
Speaker 2: “no, you first”,
Speaker 1: “no, really, you first”.
Here a mantra of “lead by example, and do not throw the first or n-th stone” might be applicable? Is this then implying self-censorship and laissez-faire? No.

I can point at DuckDuckGo and Microsoft as an anecdote, and I think I can learn via ethics, into boutique ethics, what this could mean through various (ethical and other) lenses (to me, to others, to them, to it) while respecting the act of the carer. Through that lens I might wonder what drove these businesses to this condition and use that as a next stepping stone in a learning process. This thinking would take me out of the boutique and into the larger market, and even the larger human community.

The latter I base on what some refer to as the “ethic of individual rights and responsibilities” (Ibid). It is my responsibility to learn and ask and wonder. Then I assume that the action of debasing an individual with a label I were to throw at them (including myself), such as those offered above, is judged by the “respect to which they are entitled” (Ibid). This is a principle assuming that “universal standards exist” (Ibid). And yet, on a daily basis, especially on communal days, and that throughout history: I hurl. After all, we can then play with words: what is respect, and what type of respect are they indeed entitled to?

I want to aim for a starting point of an “unconditional” respect, however naive that might seem and however meta-Jesus-esque, Gandhi-esque, Dr. King-esque, or Mandela-esque that would require me to become. Might this perhaps be a left-libertarian stance? Can I “respectfully” throw the first stone? Or does the eruption lie in the metaphorical “throwing of a stone” rather than the physical?

Perhaps there are non-violent responses that are proportional to the infraction. This might come in handy. I can decide no longer to use DuckDuckGo. However, can I decouple from Microsoft without decoupling from my colleagues, family, community? Herein the learning-as-activism might be found in seeking and promoting alternatives toward a technological ecosystem of diversity with transparency, robustness, explainability and fair interoperability.

“Am I a means to their end?” I might then ask, “or am I an end in myself?” This brings me back to the roles of carer. Are, in this one anecdotal reference, DuckDuckGo and Microsoft truly caring about their users or rather about other stakeholders? Through a capitalist lens one might be inclined to answer and be done with it. However, I prefer to keep an openness for the future, to keep on learning and considering additional diversifying scenarios and acts that could lead to equity for more than the happy few.

Through a lens of thinking about the consequences of my actions (which is said to be an opposing ethical stance compared to the above), I sense the outcome of my hurling is not desirable. However, the introduction of alternatives or methods toward understanding of potentials (without imposing) might be. I do not desire to dismiss others (e.g., cast them out, see them punished, blatantly ignore them with the veil of silenced monologue). At times, I too believe that the act of using a label is not inherently right or wrong. So I hurl, ignorant of the consequence to the other, their contexts, their constraints, their conditions, and ignorant of the cultural vibe or relationships I am then creating. Yes, decomposing a relationship is creating a fragmented composition as much as non-dialog is dialog by absence. What would be my purpose? It’s a rhetorical question, I can guess.

I am able to consider some of the consequences to others (including myself), though not all. Hence, I want to become (more) caring. The ethical dichotomy between thinking about universals or consequences is decisive in the forming of the boutique ethic. Then again, perhaps these seemingly opposing ethics are falsely positioned in an artificial dichotomy. I tend to intuit so. The holding of opposing thought and dissonance is a harmony that simply asks a bit more effort, which, to me, is embalmed ever so slightly by the processes of rhizomatic multidimensional learning.

This is why I want to consider boutique ethics while still struggling with being ignorant, yet learning, about types and wicked conundrums in ethics at larger, conflicting and more convoluted scales. So too when considering a technology I am affected by yet ignorant of.

References

Rossman, G.B., S.F. Rallis. (2010). Everyday ethics: Reflections on practice. International Journal of Qualitative Studies in Education, 23(4), 379–391.

Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley, CA: University of California Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Rossman, G.B., S.F. Rallis. (1998). Learning in the field: An introduction to qualitative research. Thousand Oaks, CA: Sage.

Rossman, G.B., S.F. Rallis. (2003). Learning in the field: An introduction to qualitative research. 2nd ed. Thousand Oaks, CA: Sage.

UNESCO. (2022). K-12 AI curricula: Mapping of government-endorsed AI curricula.

<< Critique: not as a Worry nor Dismissal, but as Co-creative Collective Path-maker >>


In exploring this statement, I wish to take the opportunity to focus on, extrapolate and perhaps contextualize the word “worry” a bit here.

I sense “worry” touches on an important human process of urgency.

What if… we were to consider who might/could be “worried,” and when “worry” is confused or used as a distracting label? Could this give any interesting insight into our human mental models and processes (not of those who do the worrying but rather of those using the label)?

The term might unwittingly end up serving as a tool for confusion or distraction (or hype). I seem to notice that “worry,” “opposition,” “reflection,” “anxiety,” “critical thought-exercises,” and “marketing rhetorics toward product promotion” are too easily confused. [Some examples of convoluted confusions might be (indirectly) hinted at in this article: here —OR— here]

To me, at least, these above-listed “x”-terms are not experienced as equatable, just yet.

As a species, within which a set of humans claims to be attracted to innovation, we might want to innovate (on) not only externals, or symptoms, but also causes, or attributes inherent to human interpretational processes and the ability to apply nuances therewith; e.g., is something “worrying,” or is it not (only) “worrying” and perhaps something else or additional that takes higher urgency and/or importance?

I imagine that in learning these distinctions, we might actually “innovate”.

Engaging in a thought-exercise is an exercise toward an increase in considering altered, alternative or nuanced potential human pathways toward future action and outcomes, as if exploring locational potentials: “there-1” rather than “there-2” or “there-n;” and that rather than an invitation for another to utter: “don’t worry.”

If so, critical thought might not need to be a subscription to “worry” nor the “dismissal” of 1 scenario, 1 technology, 1 process, 1 ideology, etc., over the other. [*1]

Then again, from a user’s point of view, I dare venture that the use of the word “worry” (as in “I worry that…”) might not necessarily be a measurable representation of any “actual” state of one’s psychology. That is, an observable behavior or interpreted (existence of an) emotion has been said to be no guaranteed representation of the mental models or processes of they who are observed (as worrying). [a hint is offered here —OR— here ]

Hence, “worry” could be, or at times seemingly is, used as a rhetorical tool from the toolboxes of ethos, pathos or logos, and not as an externalization of one’s actual emotional state in that ephemeral moment.

footnote
—-•
[*1]

Herein, in these distinctions, just perhaps, might lie a practical exercise of “democracy”.

If critical thought, rhetoric, anxiety and opposition are piled and ambiguously mixed together, then one might be inclined to self-censor due to the sheer, overwhelming confusion of not being sure whether one is perceived as dealing with one over, or instead of, the other.

Terms, terms, terms as words, words, words

As a layperson, using my brain’s ‘algorithms’, trying to pattern-recognize the tree from the forest, I wish to share my ignorant “insight,” obtained during my ongoing life-long learning, being confident someone somewhere (perhaps a future me) will find an attribute or two to disagree on:
 
‘Symbolic Artificial Intelligence’ is synonymous with the more colloquial ‘Good Old-Fashioned AI’, which is in turn simplified to the abbreviation ‘GOFAI’. Symbolic AI uses symbols that can be read by humans. These symbols represent ‘real world’ concepts. These concepts could be formal logic concepts or other (e.g. ‘linguistic’) ones. These symbols are used (or ‘manipulated’) to create ‘rules.’

‘Rules’ are also used to enable the use (or manipulation) of these symbols. This, in its entirety, I understand, for now, as an integrated whole that encapsulates human (‘expert’) knowledge and these aforementioned rules into a system: a ‘Rule-Based System’.

For instance, ‘reasoning through syllogisms’ is a rule-based method toward logical reasoning, and it implies a set of rules used by humans that are also computational and hence, I sense, could be used in the above-mentioned AI systems.
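To keep my layperson’s understanding honest (and correctable), here is a minimal sketch in Python of what I currently understand a rule-based system to be. The facts, the single rule and the tiny forward-chaining loop are all my own invented toy (a variation on the classic “all humans are mortal” syllogism), not any established engine or library:

```python
# A layperson's toy, not a real inference engine: human-readable
# symbols ("human", "mortal", "socrates") plus one explicit rule,
# applied repeatedly until no new facts can be derived.

facts = {("human", "socrates")}  # the symbol: Socrates is a human

# One rule, "all humans are mortal": if ("human", X) then ("mortal", X).
rules = [("human", "mortal")]

def forward_chain(facts, rules):
    """Apply every rule to every known fact until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for if_pred, then_pred in rules:
            for pred, subject in list(derived):
                if pred == if_pred and (then_pred, subject) not in derived:
                    derived.add((then_pred, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# prints a set containing ('human', 'socrates') and ('mortal', 'socrates')
```

The point, as far as my current understanding carries, is that both the knowledge and its manipulation are explicit and human-readable, which is what I take ‘symbolic’ to mean here.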

As an added bonus, I think I understand that if these rules and symbols are then used with, for instance, human (aka ‘natural’) language processing (‘NLP’), then one can see the ‘deterministic’ at work. And yet, here, I feel my learning is still very shaky.
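Again hedged, and again a toy of my own making rather than anything canonical: one hand-written pattern rule applied to text behaves deterministically, giving the same output for the same input every time, with no statistics or learning involved:

```python
import re

# One hand-written, GOFAI-flavored language rule (my own toy example):
# deterministic pattern matching, no statistics, no learned parameters.
PATTERN = re.compile(r"\bI am (.+)", re.IGNORECASE)
TEMPLATE = r"Why do you say you are \1?"

def respond(utterance):
    """Rewrite the utterance if the rule matches; otherwise deflect."""
    if PATTERN.search(utterance):
        return PATTERN.sub(TEMPLATE, utterance)
    return "Tell me more."

print(respond("I am worried about robots"))
# always prints: Why do you say you are worried about robots?
```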

That stated, my syntactic logic of the latter should not be turned around into thinking that I believe to have learned that NLP is inevitably and only GOFAI. I don’t think so; for now, I do not understand it as such.

This is where the preceding paragraph tries to imply the second major branch in the field of AI, alongside the first branch described above: (un)supervised ML, ANNs and the like; or so I understand it to the present day.

Some of these terms and words, in this second branch of the AI field, I explore elsewhere on this blog, as output of my autodidactic learning processes.
 

Keeping it as basic as possible, with the aim of explaining it to anyone who might ask me (while I do think it more cautious not to ask this layperson), where could I improve or correct this “understanding” (which I assume to be lacking)?

<< AI Text, Subtext & Contextual(izing) Literacies >>


It might be desirable to consider (functional, nonlinear) literacy in a larger context and not only within the market or professional realms; and not only of data preceding AI alone

For instance: computational thinking (as a methodology & secondarily as an “attitude” for increasing awareness and human discernment about the creation of one’s mental models) could (and is starting to) occur at a childhood’s level (K-12)

One might want to methodologically map this with digital literacy: not collapsed to technique or production alone, and yet, also through community lenses, eco-system & environmental lenses, cultural lenses, and policy lenses, which might/should imply ethics and careful consideration, via different mental models, allowing, for instance, what-if scenarios, value-thinking & context/consequential thought

And a learner could also be thinking about thinking:

“What could be (non-human) thinking, intelligence, awareness? How could these be imaginable, even if someone believed these not to exist outside of humans? What is signal versus communication versus language? What is poetry if not human-made? What is signal versus knowledge? Why might someone (besides me) care about alternative forms of intelligence? What would it be like to be an intelligence stuck in a car? Does consciousness exist? Is thought a tool of the mind and language a technology? What could it mean (to someone besides me) “to understand”? How do these technologies influence information? What can I do about it? How would these questions influence (my) design, application or recycling? How do / could these affect (my) energy use and (my being in this) environment? How would I balance reflection with action, with revision, with innovation, with harmony, with well-being, with compassion, with…? How can I be(come) “smarter” (less gullible / biased / less dependent) about these structures and processes?”

…and so on

Next one could consider media literacy mapped with data literacy & learning about various visualizations of the same data leading to subjectivities, & implying information, misinformation, disinformation or confusions in representation and cognitive processes, leading to sustained undesirable biases & behaviors (note: debate and dialog about “undesirable” as ongoing, compassionate and driven by caring discernment)

Then, as the attached post resonates with me, hinting behind its self-labeled “simplified” structure: AI literacy (well beyond the hype, brain mimicry or Neural Networks & Machine Learning alone; and inclusive of AI ethics even if, though some voices disagree, the technical insight is minimal)

These literacies could be nurtured both via #offline non-digital methods and via non-brand specific (online) electronics (soft & hardware)

AI strategy minus foundations could lack awareness and (longitudinal, multidimensional) sustainability

Header: sculpture by Lucas H. (2022); reproduced here with permission

<< The Tasked Homunculus >>

 

Imagine the following scenario and world:

 
Doing the task well is no longer sufficient in this world. In this world one must incessantly prove that one can do the task well, in a jargon and within time- and space-sensitive confinements that are defined and logged elsewhere, external to oneself. Either such processes toward proof are humanly observed (i.e., by other homunculi), or they are automated.
 
In effect, in this imagined world, the latter seems to be increasingly the case, spreading as if an ink blot across the ages and the social areas that world’s human individual (perhaps a homunculus) moves into, and out of, during their lifetime.
 
In this imaginary scenario, the task, as well, is no longer simply the act of making a living for oneself, one’s family, one’s community, one’s national context or one’s in-group’s nascent generations. The task is any data-generating act; preferably acts that can be aggregated and capitalized on by involving, at times unknown and obscured, third-parties.

The latter actor then is enabled to create, via its tasks, those tools toward improving tasks, to be fed back to those who have provided the data sets in the first place (e.g., that same homunculus), and to yet other parties interested in visualizing tasks outside of these tasks’ initially intended settings or (meta)physical aims.

In this imagined story, and in your imagination, where or how do you see yourself (if at all; and / or if you were that homunculus)?


—animasuri’22

——-

Header visual: digitally photo-edited digital photo of paper and pencil folded against wood. “Mediated Existence”. —animasuri’22

——-

References and perverted note-taking intertwining “#task”, “#assessment” and “#data” from:

Wiliam, D. (2006). Assessment for #Learning. Cambridge AfL Keynote. Retrievable online from here.

data tasked from bodies (as shed data and free labor) from here

data tasked across species (as alienating datasets from those who do or don’t count) from here

data tasked across borders (as disembodied data teleportation) from here

data tasked from mobiles (as extended-cognition extenders) from here

#dataliteracy #wellbeing #systemsthinking #alienation #poetry #creativity #adaptability

<< My Data’s Data Culture >>


Far more eloquently described, more than 15 years ago, by Lawrence Lessig, I too sense an open or free culture, and design therein, might be constrained or conditioned by technology, policy, community and market vectors.

I perceived Lessig’s work then to have been focused on who controls your cultural artifacts. These artifacts, I sense, could arguably be understood as types of (in)tangible data sets given meaningful or semiotic form as co-creative learning artifacts (by you and/or others).

I imagine, for instance, “Mickey Mouse” as a data set (perhaps extended, as a cognitive net, well beyond the character?). Mickey, or any other artifact of your choosing, aids one to learn about one’s cultural narratives and, as extended cognition, in positive feedback loops, about oneself in communicative co-creation with the other (who is engaged in similar interactions with this and other datasets). However, engaging with a Mickey meant / means risking persecution under IPR (I wrote on this through an artistic lens here).

Today, such data sets for one’s artificial learning (i.e., learning through a human-made artifact) are (also) we ourselves. We are data. Provocatively: we are (made) artificial by the artificial. Tomorrow’s new psychoanalyst-teacher could very well be your friendly neighborhood autonomous data visualizer; or so I imagine.

Mapping Lessig, with the article below, and with many of the sources one could find (e.g., Jason Silva, Kevin Kelly, Mark Sprevak, Stuart Russell, Kurzweil, Yuval Noah Harari, Kaśka Porayska-Pomsta), I am enabled to ponder:

Whom do the visualizations serve? Whose privacy and preferences do they interfere with? Whose data is alienated beyond the context within which its use was intended? Who owns (or has the IPR on) the data learned from the data I create during my co-creative cultural learning (e.g., online social networking, self-exhibition, as well as more formal online learning contexts), allowing third parties to learn more about me than I am given access to learn about myself?

Moreover, differently from they who own Mickey, who among us can sue the users of our data, or the artifacts appropriated therefrom, as if it were (and actually is) our own IPR?

Given the spirit of artificial intelligence in education (AIED), I felt that the following article, published these past days on such data use that is algorithmically processed in questionably ethical or open manners, could resonate with others as well. (#ethics, #aiethics)

Epilogue — A quote:

“The FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.”

References

https://www-protocol-com.cdn.ampproject.org/c/s/www.protocol.com/amp/ftc-algorithm-destroy-data-privacy-2656932186

Lessig’s last speech on free culture: here

Lessig’s Free Culture book: here

<< Demons and Demos >>


The New Yorker and NSO in some glorious spy-novel context here

…and further, as a cherry on this cake, one might quickly conjure up Cambridge Analytica, or singularly, Facebook with its clandestine 50,000+ or so datapoints per milked data-cow (aka what I also lovingly refer to as humans as datacyborgs), which the company’s systems are said to distill through data collection. Yes, arguably the singularity is already here.

Then, more recently, one can enjoy the application by a facial recognition service, Clearview AI, that uses its data mining to identify (or read: “spy on”) dead individuals; a service which might seem very commendable (even individuals with no personal social media accounts simply have to appear in someone else’s visual material); and yet the tech has been applied for more.

The contextualization might aid one in having the narrative amount to:

“Alienation,” and that, if one were to wish, could be extended with the idea of the “uncanny” hinted at in my datacyborg poetics. “Alienation” here is meant somewhat as it is in the social sciences: the act of lifting the intended use of one’s data outside of that intended use, by a third party. The questionable act of “alienation” is very much ignored or quietly accepted (since some confuse “public posting” with a “free for all”).

What personally disturbs me is that the above manner of writing makes me feel like a neurotic, conspiratorial excuse of a person… One might then self-censor a bit more, just so as not to upset the balance with any demonizing push-back (after all, what is one’s sound, educated and rational “demos” anyway?). This one might do while others, in the shadows of our silently-extracted data, throw any censorship, in support of the hidden self (of the other), out of the proverbial window.

Contextualizing this further: related to memory, one might also wish to consider the right to be forgotten besides the right to privacy. These above-mentioned actors, among a dozen others, rip this autonomous decision-making out of our hands. If one were then to consider ethics mapped with the lack of autonomy, one could be shiveringly delighted not to have to buy a ticket to a horror-spy movie, since we can all enjoy such narratives for “free” and in “real” life.

Thank you, Dr. WSA, for the trigger


Epilogue:

“Traditionally, technology development has typically revolved around the functionality, usability, efficiency and reliability of technologies. However, AI technology needs a broader discussion on its societal acceptability. It impacts on moral (and political) considerations. It shapes individuals, societies and their environments in a way that has ethical implications.”

https://ethics-of-ai.mooc.fi/chapter-1/4-a-framework-for-ai-ethics

…is ethics perhaps becoming / still as soothing bread for the demos in the games by the gazing all-seeing not-too-proverbial eye?

In extension to my above post (for those who enjoy interpretative poetics):

One might consider that the confusion of a “public posting” being equated with “free for all” (and hence falsely being perceived as forfeiting autonomy, integrity, and the likes), is somewhat analogous with abuses of any “public” commons.

Expanding this critically, and to some perhaps provokingly further, one might also see this confusion with thinking that someone else’s body is touch- or grope-for-all simply because it is “available”.

Now let’s be truly “meta” about it all: one might consider that the human body is digital now. (I.e., my datacyborg as the uber-avatar. Moving this then into the extreme: if I were a datacyborg, then someone else’s extraction beyond my public flaneuring here, in my chosen setting, could poetically be labeled as “datarape”.)

As one might question the ethics of alienatingly ripping the biological cells from Henrietta Lacks beyond the extraction of her cancer into labs around the world, one might wonder about the ethics of data being ripped and alienated into labs for market experimentation and the infinite panopticon of data-prying someone’s (unwanted) data immortality.

https://en.m.wikipedia.org/wiki/Henrietta_Lacks

<< Asimov’s Humans >>


As an absurd (or surreal-pragmatic, compassion-imbued) thought-exercise, iterated from Asimov’s 1942 Laws of Robotics, let us assume we substitute “robot” —the latter of which can etymologically be traced to the Czech to mean as much as “forced labor”— with “human”; then one might get the following (the substitution is mechanical enough to be sketched in code, below the list):

  • A human may not injure a human being or, through inaction, allow a human being to come to harm. [*1]
  • A human must obey the orders given them by human beings except where such orders would conflict with the First Law. [*2]
  • A human must protect their own existence as long as such protection does not conflict with the First or Second Laws. [*3]
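For the playful (or lazy) reader: the substitution above, and the perversions #002 and #003 further below, are mechanical enough that a few lines of Python can generate them. The law wording is paraphrased from Asimov; the slot-filling scheme is simply this post’s game made explicit:

```python
# A toy generator for these thought-exercise "perversions": Asimov's
# three laws (paraphrased) with the species slots left open.
LAWS = [
    "A {a} may not injure a {b} or, through inaction, allow a {b} to come to harm.",
    "A {a} must obey the orders given them by {b}s except where such orders "
    "would conflict with the First Law.",
    "A {a} must protect their own existence as long as such protection does "
    "not conflict with the First or Second Laws.",
]

def perversion(agent, other):
    """Fill the species slots, as the thought-exercise does by hand."""
    return [law.format(a=agent, b=other) for law in LAWS]

for law in perversion("human", "human"):   # this post's perversion #001
    print(law)
# perversion("human", "robot") and perversion("robot", "robot") give
# rough analogues of perversions #002 and #003 below.
```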

[*1]

It seems we humans do not adhere to this first law. If humans were not fully enabled to adhere to it, which techniques do and will humans put into practice so as to constrain robots (or more-or-less forced laborers) to do so?

The latter, in the context of these laws, are often implied to harbor forms of intelligence. This, in turn, might obligate one to consider thought, reflection, spirituality, awareness and consciousness as being part of the fuzzy cloud of “intelligence” and “thinking”.

Or, in a conquistadorian swipe, one might deny the existence or importance of these attributes, in any other but oneself, altogether. This could then free one’s own conscience of any wrongdoing and set apart one’s unique features as solely human.

One might consider that, if humans were able to constrain a non-human intelligence, that non-human intelligence might perhaps use the same work-around used by humans, which enables the latter to ignore this first law for their own species. Or, perhaps humans, in their fear of freedom, would superimpose upon themselves the same tools which were invented for the artificially intelligent beings.

[*2] 

The attribute of being forced into labor seems not prevalent, except in “must obey.” Then again, since the species, in the above version of the three laws, is no longer dichotomized (robot vs. human), one might (hope to) assume here that the role of the obeying human agent could be interchangeable with that of the ordering human agent.

Though, humans have yet to consider Deleuze’s and Guattari’s rhizomic (DAO) approach for themselves, outside of technological networks, blockchains and cryptocurrencies, which, behind the scenes of these human technologies, are imposingly hierarchical (and authoritarian, or perhaps tyrannical at times) upon, for instance, energy use, which in turn could be seen as violating Laws 1 and 3.

Alternatively, one might refer to the present state of human labor in considering the above, and argue this could all be wishful thinking. 

If one were to add to this a similarly adapted question from Turing (a question he himself dismissed): “can a human think?”

The above would be instead of the less appropriated versions of “can a machine think?” (soft or hard) or “can a computer think?” (soft or hard). If one were to wonder “can a human think?”, then one is allowing the opening of a highly contested and uncomfortable realm of imagination. Then again, one is imposing this on any artificial form, or any form that is segregated from the human by being narrated as “non-human” (i.e., fauna or flora; or rather, most of us limit this to “animal”).

As a human law: by assigning irrational or non-falsifiable attributes, fervently defendable as solely human, and by fervently taking away these same attributes from any other than oneself, one has allowed oneself to justify dehumanizing the other (human or other) into being inferior or available for forced labor.

[*3]

This iterated law seems continuously broken.

If one then were to consider human generations yet to be born (contextualized by our legacies of designed worlds and their ecological consequences), one might become squeamish and prefer to hum a thought-silencing song, which could inadvertently revert one back to the iteration of Turing’s question: “can humans think?”

The human species also applies categorizing phrasing containing “overthink,” “less talking, more doing,” “too cerebral,” and so on. In the realm of the above three laws, and this thought-exercise, these could lead to some entertaining human or robot (i.e., in harmony with its etymology, a “forced laborer”) paradoxes alike:

“could a forced laborer overthink?”
“could a forced laborer ever talk more than do?”
“could a forced laborer be too cerebral?” One might now be reminded of Star Wars’ slightly neurotic C-3PO, or of a fellow (de)human.

—animasuri’22

Thought-exercise perversion #002 of the laws:

<< Asimov’s Humans #2 >>

“A human may not injure a robot or, through inaction, allow a robot to come to harm.”

“A human must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”


—animasuri’22

Thought-exercise perversion #003 of the laws:

<< Asimov’s Neo-Humans #3 >>

“A robot may not injure a robot or, through inaction, allow a robot to come to harm.”

“A robot must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”

—animasuri’22