Tag Archives: aiethics

<< Promethean Tech >>


The Ancient Greek Gods sadistically acknowledged Prometheus for giving fire to the humans. Democratization of fire was met with fierce opposition from those who controlled it: the other Gods. These uber-creatures chained the lone Titan, a god of lesser stature, to a rock so that a symbol of power and might, Zeus’ emblematic eagle, could eternally eat his liver.

The eagle was attracted to the magnetism of Prometheus’ liver. The symbols are imposing and heavy. In contrast, the potential interpretative coherence elegantly follows Brownian non-motifs: randomness, or arbitrariness. The audience to this theatrical display of abuse is both human and godlike. Observing the plight of Prometheus, both sentient sets might now be convinced to mute any ethical concern or dissent. One would not want to suffer what Prometheus is suffering.

A fun fact is that the story, its nascence(s), and its iterations are “controlled,” again in a Brownian manner, by collectives of humans alone. No Gods were harmed in the making of this story. A story in the making it still is nevertheless.

An abrupt intermezzo as a short interconnecting move: the humans who wrote this Greek story might have been as disturbing as the persons concocting any technology intended to render humans mute under the candy-flag of democratizing access to technological magic, sparkles, and meaning-making. At least both sets of authors are dealing with their own whimsicality, or with the fancifulness they think they observe and hope to control with their form of storytelling: text as technology, and technology creating text. Where lie the nuances that could distinguish the providers of technology from, or harmonize them with, the authors of Prometheus?

The low-hanging fruit is, as so often, the plucking of simple polarizations such as past versus present, or fictional versus factual. And yet, nuancing these creates a proverbial gradience or spectrum: there is no “versus,” there is verse. The nuance is the poetry and madness we measure and journey together. The story is the blooming of relations with the other in the past, the now, and the future.

Prometheus is crafted and hyped as the promoter of humanity. So too are they who create and they who bring technologies into the world. Their hype is as ambrosia. Yet here the analogy starts to show cracks. Prometheus was not heralded by the story-encapsulated Maker (i.e., Zeus). Only questionably was Prometheus heralded by the actual makers: the human authors of the story.

The innovations in the story were possibly intended not to be democratized: fire, for instance, and more so the veiling of human authorship and the diluting of agency over one’s past acts (e.g., Prometheus transferring fire). These innovative potentials (desirable or not) were fervently hidden and strategically used when hierarchical confirmations were deemed necessary. What today is hung on the pillory, and what is hidden from sight? What is used as distraction by inducements of fear (of pain), of bliss, or of a more potent mix thereof?

Zeus was hidden from sight. His User Interface (UI) flew in when data needed to be collected. The pool of data was nurtured, as was the user experience (UX): Prometheus’ liver grew back every night. In some technologies the figure of the Wizard of Oz (as if Zeus hiding behind the acts of his eagle) is used to describe how users are tricked into thinking that the technology is far more capable than it actually is. In some cases there are actual humans at play, as if ghosts in the machine, flying in from a ubiquitous yet hidden place.

For the latter one can find examples, as one can for: people filtering content as if a bot catering to the end-user (leaving the user oblivious to the suffering such a human filterer experiences); people not consenting to their output being appropriated into dehumanized databases; a linguistic construct and Q&A sliding a human into confusing consciousness, sentience, or artistry as existing at genius levels in the majority of humans, or in the human-made technology, as much as in the human user. This could be perceived as a slow metamorphosis toward anthropomorphic dehumanization. Indeed, one can dismiss this flippantly by asking: “what makes a human, human? Surely not only this nor that, …nor that, nor…”

Democratization is not that of technology (alone). In line with the thinking of technology as democratizing, e.g. (as some are claiming) democratizing “art” via easily accessible technological *output,* one can by extension argue that delusion is even more easily democratized. The delusion of being serviced (while being used as a data source and as a product-offloading platform); of being cared for (while turned into a statistic); of being told one is amazing (rather than being touched by wonder, open inquiry, and duty of care); of being told one is uniquely better (rather than increasingly being part of relational life with others, as opposed to positioned above them); the delusion of being in control of one’s input and output (yet being controlled); of having access, and so on. In this flippant manner of promoting a technology, democratization is promoted via technologies analogous to stale bread and child-friendly games.

The idea of access is central: access to service, amazement, uniqueness, geniuses, and cultural heritages (especially those one does not consider one’s own). A pampered escapism.

The latter is especially intriguing when observing the diffusion models anyone can access, which are based on billions of creations by humans who came before us, or who even now still roam among us (e.g., https://beta.dreamstudio.ai/home or https://www.midjourney.com/home/ ).

This offers a type of access: any set of words transformed into any visual. It offers a type of access without those being accessed having any knowledge of the penetration: the artist’s work stored in the databases. Come to think of it, it is not only the artist: it is any human utterance and output stored in any database. With these technologies, as if the liver of Prometheus, the human echoes are not accessed with the consent of their creators. “Democracy,” as access for all, by the beak of Zeus’ eagle. Is that democracy, or is that more like unsolicited “access” to a debilitating drug slipped into one’s drink? At least Prometheus felt it when he was pecked for his liver.
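To make the ease of that access concrete: a minimal sketch of such a word-to-visual call, assuming the open-source Hugging Face diffusers library and one of the publicly hosted Stable Diffusion checkpoints (the model id and prompt here are merely illustrative):

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# A public checkpoint; its training set was scraped from billions of
# human-made images, without the consent of their creators.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Any set of words becomes a visual; nothing in this call surfaces
# whose works, stored in the training databases, shaped the output.
image = pipe("a titan chained to a rock, oil painting").images[0]
image.save("prometheus.png")
```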

In closing:

The awesome and tricking power of story is that one and the same structural story can drape various types of functional intent, leaving meaning opaque and deniable. Yes, so does this story here. So does Prometheus’ story, as well as the stories of technologies, such as the story of diffusion models and the word-to-visual technologies derived from them.

Transformation of history into a diffusible amalgamation via stable diffusion technologies takes human artifacts as Promethean livers to be pecked, regrown, and pecked again. This is irrespective of the proverbial or actually experienced “pleasurable” pain it keeps on giving. These technologies are promoted as democratizing. That I state my technology is democratizing does not mean it was intended to be democratizing, that it turns out to be, or that it is applied in a democratizing manner. Moreover, if this is the depth of democracy, to blindly take what came before, one might want to reconsider this Brownian interpretation of the story of democracy.

The magnetism of life hinted at in human expressions can be borrowed, adapted, and adopted. We learn from others if we know what it is they have left us to build upon. We can innovate if we understand, or are enabled to understand over time, what it is that is being transformed. And yet the nuance, reference, and elegance with which this could be done allow consciousness, discernment, and awareness to be communicated, related, and nurtured.

At present the vastness and opaqueness of the databases, within which our data are gluttonously stored, do not yet allow this finesse. While the stories they reinterpret and aggregate could be educational, stimulating, and fun, we might want to reconsider what value-adding, meaning-making randomness becomes once it moves out of the hands of expert designers and into the hands of the masses. Vast technology-driven access is not synonymous with democratization. Understanding and duty of care are intricate ingredients as well, if the demons in democracy are to be kept at bay.

<< what’s in a word but disposable reminiscence >>


A suggested (new-ish) word that perhaps could use more exposure is:

nonconsensuality

It hints at entropy within human relations and at decay in acknowledgement of the other (an acknowledgement one might sense as an active vector coming from compassion). Such acknowledgement is then of the entirety of the other and their becoming through spacetime, and not limited only to their observable physical form or function.

It is, secondly, also applicable to thinking while acting: in the treatment of the other and their expressions across spacetime, in repurposing, and in one’s relation in the world with that which one intends to claim or repurpose.

Thirdly, this word is, perhaps surprisingly, also applicable to synthetic tech output. One could think about how one group is presented (more than another) in such output without their consent to be presented as such. Such output could be an artificially generated visual (or other artifact) that did not previously exist, nor was previously allowed the scale at which it can now be mechanically reproduced or reiterated into quasi-infinite digital versions.

Fourthly, through such a tech-lens one could relate the word to huge databases compiled and used to create patterns from the unasked-yet-claimed other, or at least from their (creative, artistic, or other more or less desirable) output that is digital or digitized, without consideration of the right to be forgotten or the right not to be repurposed ad infinitum.

Fifthly, in nurturing future senses of various cultural references, one could argue the word also applies to the (alienated) creations of fellow humans who have long passed, and yet who could be offered acknowledgement (as compensation for no longer being able to offer consent) by having their used work referenced in a metadata file.

As such, I wish I could give an ode to they who, or that which, came before me when I prompted a diffusion model to generate this visual. However, I cannot. Paradoxically, the machine is hyped to “learn,” while it is unilaterally decided for humans that they shall not learn where their work is used, or where the output following their “prompt” came from. I sense it as a cultural loss that I cannot freely decide to learn where something might have sprouted from. It has been decided for me, without my consent, that I must alienate these pasts, whether or not I want to ignore them.
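What might such acknowledgement look like in practice? A minimal sketch of a hypothetical provenance sidecar follows; every field name here is invented, since no current diffusion service exposes such consent-aware metadata:

```python
import json

# A hypothetical provenance sidecar for one generated visual.
# No diffusion tooling currently emits anything like this; the field
# names are invented to illustrate consent-aware acknowledgement.
sidecar = {
    "generated_file": "prometheus.png",
    "prompt": "a titan chained to a rock, oil painting",
    "model": "example-diffusion-v1",
    "influential_training_works": [
        {
            "title": "(title of a source work)",
            "creator": "(a fellow human, living or long passed)",
            "consent_obtained": False,
            "right_to_be_forgotten_honoured": None,  # unknown / unaddressed
        },
    ],
}

with open("prometheus.png.provenance.json", "w") as f:
    json.dump(sidecar, f, indent=2)
```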

—-•

#aiethics #aiaesthetics #aicivilization #meaningmaking #rhizomatichumanity

Post scriptum:

Through such a cultural lens, as suggested above, this possible dissonance seems reduced in shared intelligence. To expand that cultural lens onto another debated tech: the relation between reference, consent, acknowledgment, and application seems as if an antithetical cultural anti-blockchain: severed and diffused.


<< Boutique Ethic >>

Thinking of what I label as “boutique ethic,” such as AI Ethics, must indeed come with thinking about ethics (cf. here). I think this is not an assignment for the experts alone. It is also one for me: the layperson-learner.

Or is it?

Indeed, if seen through a more-than-techno-centric lens, some voices do claim that one should not be bothered with ethics if one does not understand the technology that is confining ethics into a boutique ethic, e.g., “AI” (see the 2022 UNESCO report on AI curricula in K-12). I am learning to disagree.

I am not a bystander, passively looking on, and onto my belly button alone. Opening acceptance toward Noddings’ thought on care (1995, 187): “a carer returns to the cared-for,” when in the most difficult situations principles fail us (Rossman & Rallis 2010). How are we caring for those affected by the throwing around of the label “AI” (as a hype or as a scarecrow)?

Simultaneously, how are we caring for those affected by the siphoning off of their data: by the application, unknown to the affected, of data derived from them and processed in opaque and ambiguous processes? (One could, as one of many anecdotes, summon up the polemics surrounding DuckDuckGo and Microsoft, or Target and its baby-product coupons, and so on.)

And yet, let us expand back to the ethics surrounding the boutiqueness of it all: the moment I label myself (or another, such as the humans behind DuckDuckGo) as “stupid,” “monster,” “trash,” “inferior,” “weird,” “abnormal,” “you go to hell,” or other more colorful itemizations, is the moment my (self-)care evaporates and my ethical compass moves away from the “…unconditional worth of all human beings and the equal respect to which they are entitled” (Rossman & Rallis 2010). Can a mantra then come to the aid: “carer, return to the cared-for”? I want to say: “yes.”

Though, what is the impact of the mantra if the other does not apply this mantra (i.e., DuckDuckGo and Microsoft)? And yet, I do not want to get into a yoyo “spiel” of:
Speaker 1:“you first”,
Speaker 2: “no, you first”,
Speaker 1: “no, really, you first”.
Here a mantra of “lead by example, and do not throw the first or nth stone” might be applicable? Is this then implying self-censorship and laissez-faire? No.

I can point at DuckDuckGo and Microsoft as an anecdote, and I think I can learn, via ethics and into boutique ethics, what this could mean through various (ethical and other) lenses (to me, to others, to them, to it) while respecting the act of the carer. Through that lens I might wonder what drove these businesses to this condition and use that as a next stepping stone in a learning process. This thinking would take me out of the boutique and into the larger market, and even the larger human community.

The latter I base on what some refer to as the “ethic of individual rights and responsibilities” (ibid.). It is my responsibility to learn and ask and wonder. I then assume that the act of debasing an individual (including myself) with a label I might throw at them, such as those offered above, is judged by the “respect to which they are entitled” (ibid.). This is a principle assuming that “universal standards exist” (ibid.). And yet, on a daily basis, especially on communal days, and throughout history: I hurl. After all, we can then play with words: what is respect, and what type of respect are they indeed entitled to?

I want to aim for a starting point of “unconditional” respect, however naive that might seem, and however meta-Jesus-esque, Gandhi-esque, Dr. King-esque, or Mandela-esque that would require me to become. Might this perhaps be a left-libertarian stance? Can I “respectfully” throw the first stone? Or does the eruption lie in the metaphorical “throwing of a stone” rather than the physical?

Perhaps there are non-violent responses that are proportional to the infraction. These might come in handy. I can decide to no longer use DuckDuckGo. However, can I decouple from Microsoft without decoupling from my colleagues, family, community? Herein the learning-as-activism might be found: in seeking and promoting alternatives toward a technological ecosystem of diversity, with transparency, robustness, explainability, and fair interoperability.

“Am I a means to their end,” I might then ask, “or am I an end in myself?” This brings me back to the roles of the carer. Are, in this one anecdotal reference, DuckDuckGo and Microsoft truly caring about their users, or rather about other stakeholders? Through a capitalist lens one might be inclined to answer and be done with it. However, I prefer to keep an openness for the future, to keep on learning and considering additional diversifying scenarios and acts that could lead to equity for more than the happy few.

Through a lens of thinking about the consequences of my actions (said to be an opposing ethical stance to the above), I sense the outcome of my hurling is not desirable. However, the introduction of alternatives, or of methods toward understanding potentials (without imposing), might be. I do not desire to dismiss others (e.g., cast them out, see them punished, blatantly ignore them under the veil of silenced monologue). At times I too believe that the act of using a label is not inherently right or wrong. So I hurl, ignorant of the consequence to the other, their contexts, their constraints, their conditions, and ignorant of the cultural vibe or relationships I am then creating. Yes, decomposing a relationship is creating a fragmented composition, as much as non-dialog is dialog by absence. What would be my purpose? It’s a rhetorical question; I can guess.

I am able to consider some of the consequences to others (including myself), though not all. Hence, I want to become (more) caring. The ethical dichotomy between thinking in universals and thinking in consequences is decisive in the forming of the boutique ethic. Then again, perhaps these seemingly opposing ethics are falsely positioned in an artificial dichotomy. I tend to intuit so. The holding of opposing thought and dissonance is a harmony that simply asks a bit more effort; an effort that, to me, is salved ever so slightly by the processes of rhizomatic multidimensional learning.

This is why I want to consider boutique ethics while still struggling with being ignorant, yet learning, about the types of, and wicked conundrums in, ethics at larger, conflicting, and more convoluted scales. So too when considering a technology I am affected by yet ignorant of.

References

Rossman, G. B., & Rallis, S. F. (2010). Everyday ethics: Reflections on practice. International Journal of Qualitative Studies in Education, 23(4), 379–391.

Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley, CA: University of California Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Rossman, G. B., & Rallis, S. F. (1998). Learning in the field: An introduction to qualitative research. Thousand Oaks, CA: Sage.

Rossman, G. B., & Rallis, S. F. (2003). Learning in the field: An introduction to qualitative research (2nd ed.). Thousand Oaks, CA: Sage.

UNESCO. (2022). K-12 AI curricula: Mapping of government-endorsed AI curricula.

<< Critique: not as a Worry nor Dismissal, but as Co-creative Collective Path-maker >>


In exploring this statement, I wish to take the opportunity to focus on, extrapolate and perhaps contextualize the word “worry” a bit here.

I sense “worry” touches on an important human process of urgency.

What if we were to consider who might or could be “worried,” and when “worry” is confused with, or used as, a distracting label? Could this give any interesting insight into our human mental models and processes (not those of the worriers, but rather those of the label-users)?

The term might unwittingly end up serving as a tool for confusion or distraction (or hype). I seem to notice that “worry,” “opposition,” “reflection,” “anxiety,” “critical thought-exercises,” and “marketing rhetorics toward product promotion” are too easily confused. [Some examples of convoluted confusions might be (indirectly) hinted at in this article: here —OR— here]

To me, at least, the terms listed above are not experienced as equatable, just yet.

As a species, within which a set of humans claims to be attracted to innovation, we might want to innovate (on) not only externals, or symptoms, but also causes, or the attributes inherent to human interpretational processes and the ability to apply nuance therewith. E.g., is something “worrying,” or is it not (only) “worrying” but perhaps something else, or something additional, that takes higher urgency and/or importance?

I imagine that in learning these distinctions, we might actually “innovate”.

Engaging in a thought-exercise is an exercise toward increased consideration of altered, alternative, or nuanced potential human pathways toward future action and outcomes, as if exploring locational potentials: “there-1” rather than “there-2” or “there-n;” and that rather than an invitation for another to utter: “don’t worry.”

If so, critical thought might not need to be a subscription to “worry,” nor the “dismissal” of one scenario, one technology, one process, one ideology, etc., over another. [*1]

Then again, from a user’s point of view, I dare venture that the use of the word “worry” (as in “I worry that…”) might not necessarily be a measurable representation of any “actual” state of one’s psychology. That is, an observable behavior, or the interpreted (existence of an) emotion, has been said to be no guaranteed representation of the mental models or processes of those who are observed (as worrying). [a hint is offered here —OR— here]

Hence, “worry” could be, or at least seemingly is at times, used as a rhetorical tool from the toolboxes of ethos, pathos, or logos, and not as an externalization of one’s actual emotional state in that ephemeral moment.

footnote
—-•
[*1]

Herein, in these distinctions, just perhaps, might lie a practical exercise of “democracy.”

If critical thought, rhetoric, anxiety, and opposition are piled up and ambiguously mixed together, then one might be inclined to self-censor due to the mere sense of overwhelming confusion: of not being sure whether one will be perceived as dealing with one rather than, or instead of, the other.

<< My Data’s Data Culture >>


As Lawrence Lessig described far more eloquently more than 15 years ago, I too sense that an open or free culture, and design therein, might be constrained or conditioned by technology, policy, community, and market vectors.

I perceived Lessig’s work then to have been focused on who controls your cultural artifacts. These artifacts, I sense, could arguably be understood as types of (in)tangible data sets given meaningful or semiotic form as co-creative learning artifacts (by you and/or others).

I imagine, for instance, “Mickey Mouse” as a data set (perhaps extended, as a cognitive net, well beyond the character?). Mickey, or any other artifact of your choosing, aids one to learn about one’s cultural narratives and, as extended cognition, in positive feedback loops, about oneself in communicative co-creation with the other (who is engaged in similar interactions with this and other data sets). However, engaging with a Mickey meant, and means, risking prosecution under IPR (I wrote on this through an artistic lens here).

Today, such data sets for one’s artificial learning (i.e., learning through a human-made artifact) are (also) we ourselves. We are data. Provocatively: we are (made) artificial by the artificial. Tomorrow’s new psychoanalyst-teacher could very well be your friendly neighborhood autonomous data visualizer; or so I imagine.

Mapping Lessig onto the article below, and onto many of the sources one could find (e.g., Jason Silva, Kevin Kelly, Mark Sprevak, Stuart Russell, Kurzweil, Yuval Noah Harari, Kaśka Porayska-Pomsta), I am enabled to ponder:

Who do the visualizations serve? Whose privacy and preferences do they interfere with? Whose data is alienated beyond the context within which its use was intended? Who owns (or holds the IPR on) the data learned from the data I create during my co-creative cultural learning (e.g., online social networking, self-exhibition, as well as more formal online learning contexts), allowing third parties to learn more about me than I am given access to learn about myself?

Moreover, unlike they who own Mickey, who among us can sue the users of our data, or of the artifacts appropriated therefrom, as if these were (and actually are) our own IPR?

Given the spirit of artificial intelligence in education (AIED), I felt that the following article, published these past days on such data use that is algorithmically processed in questionable ethical or open manners, could resonate with others as well. (#ethics, #aiethics)

Epilogue — A quote:

“The FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices. Forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.”

References

https://www-protocol-com.cdn.ampproject.org/c/s/www.protocol.com/amp/ftc-algorithm-destroy-data-privacy-2656932186

Lessig’s last speech on free culture: here

Lessig’s Free Culture book: here

<< Demons and Demos >>


The New Yorker on NSO, in some glorious spy-novel context: here

…and further, as a cherry on this cake, one might quickly conjure up Cambridge Analytica or, singularly, Facebook, with the clandestine 50,000+ or so datapoints per milked data-cow (a.k.a. what I also lovingly refer to as humans as datacyborgs) which the company’s systems are said to distill through data collection. Yes, arguably the singularity is already here.

Then, more recently, one can enjoy the application of a facial recognition service, Clearview AI, which uses its data mining to identify (read: “spy on”) dead individuals; a service that might seem very commendable (and reaches even individuals with no personal social media accounts, since one simply has to appear in someone else’s visual material); and yet the tech has been applied to more.

The contextualization might aid one to have the narrative amount to:

“Alienation,” which, if one were to wish, could be extended with the idea of the “uncanny” hinted at in my datacyborg poetics. “Alienation” here is meant somewhat as it is in the social sciences: the act of lifting one’s data, by a third party, out of the use for which it was intended. The questionable act of “alienation” is very much ignored or quietly accepted (since some confuse “public posting” with a “free for all”).

What personally disturbs me is that the above manner of writing makes me feel like a neurotic, conspiratorial excuse of a person… one might then self-censor a bit more, just to not upset the balance with any demonizing push-back (after all, what is one’s sound, educated, and rational “demos” anyway?). This one might do while others, in the shadows of our silently extracted data, throw any censorship, in support of the hidden self (of the other), out of the proverbial window.

To contextualize this further in relation to memory: one might also wish to consider the right to be forgotten besides the right to privacy. The above-mentioned actors, among a dozen others, rip this autonomous decision-making out of our hands. If one were then to consider ethics mapped against this lack of autonomy, one could be shiveringly delighted not to have to buy a ticket to a horror-spy movie, since we can all enjoy such narratives for “free” and in “real” life.

Thank you Dr. WSA for the trigger


Epilogue:

“Traditionally, technology development has typically revolved around the functionality, usability, efficiency and reliability of technologies. However, AI technology needs a broader discussion on its societal acceptability. It impacts on moral (and political) considerations. It shapes individuals, societies and their environments in a way that has ethical implications.”

https://ethics-of-ai.mooc.fi/chapter-1/4-a-framework-for-ai-ethics

…is ethics perhaps becoming / still as soothing bread for the demos in the games by the gazing all-seeing not-too-proverbial eye?

In extension to my above post (for those who enjoy interpretative poetics):

One might consider that the confusion of a “public posting” being equated with “free for all” (and hence falsely being perceived as a forfeiting of autonomy, integrity, and the like) is somewhat analogous to abuses of any “public” commons.

Expanding this critically, and to some perhaps provokingly, further: one might also see this confusion in the thinking that someone else’s body is touch- or grope-for-all simply because it is “available.”

Now let’s be truly “meta” about it all: one might consider that the human body is digital now (i.e., my datacyborg as the uber-avatar). Moving this into the extreme: if I were a datacyborg, then someone else’s extraction beyond my public flaneuring here, in my chosen setting, could poetically be labeled “datarape.”

As one might question the ethics of alienatingly ripping the biological cells from Henrietta Lacks, beyond the extraction of her cancer, into labs around the world, one might wonder about the ethics of data being ripped and alienated into labs for market experimentation, and about the infinite panopticon of data-prying into someone’s (unwanted) data immortality.

https://en.m.wikipedia.org/wiki/Henrietta_Lacks

<< Asimov’s Humans >>


As an absurd (or surreal-pragmatic, compassion-imbued) thought-exercise, iterated from Asimov’s 1942 Laws of Robotics, let us assume we substitute “robot” (a word which etymologically traces to the Czech for as much as “forced labor”) with “human.” One might then get the following (a small code sketch of this mechanical substitution follows the list):

  • A human may not injure a human being or, through inaction, allow a human being to come to harm. [*1]
  • A human must obey the orders given them by human beings except where such orders would conflict with the First Law. [*2]
  • A human must protect their own existence as long as such protection does not conflict with the First or Second Laws. [*3]
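The substitution itself is mechanical enough to automate. A minimal sketch, assuming nothing beyond the law texts as quoted in this post; the same hypothetical helper also reproduces perversion #003 further below (perversion #002 mixes roles per law, so it is left to hand-crafting):

```python
# Asimov's 1942 laws, as quoted in this thought-exercise.
LAWS = [
    "A robot may not injure a human being or, through inaction, "
    "allow a human being to come to harm.",
    "A robot must obey the orders given them by human beings "
    "except where such orders would conflict with the First Law.",
    "A robot must protect their own existence as long as such protection "
    "does not conflict with the First or Second Laws.",
]

def pervert(laws, actor="human", patient="human being"):
    """Swap the law-subject ('robot') and its object ('human being')."""
    swapped = []
    for law in laws:
        law = law.replace("A robot", f"A {actor}")
        law = law.replace("human being", patient)
        swapped.append(law)
    return swapped

# Perversion #001 (this post): the robot's role becomes human.
for law in pervert(LAWS):
    print(law)

# Perversion #003 (below): every role becomes a robot.
for law in pervert(LAWS, actor="robot", patient="robot"):
    print(law)
```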

[*1]

It seems we humans do not adhere to this first law. If humans are not fully enabled to adhere to it, which techniques do, and will, humans put into practice so as to constrain robots (i.e., more or less forced laborers) to do so?

The latter, in the context of these laws, are often implied to harbor forms of intelligence. This, in turn, might obligate one to consider thought, reflection, spirituality, awareness, and consciousness as being part of the fuzzy cloud of “intelligence” and “thinking.”

Or, in a conquistadorian swipe, one might deny the existence or importance of these attributes in any other but oneself altogether. This could then free one’s own conscience of any wrongdoing, and reserve these features as uniquely, solely human.

One might consider: if humans were able to constrain a non-human intelligence, perhaps that non-human intelligence might use the same work-around humans use to enable themselves to ignore this first law for their own species. Or perhaps humans, in their fear of freedom, would superimpose upon themselves the same tools invented toward the artificially intelligent beings.

[*2] 

The attribute of being forced into labor seems not prevalent, except in “must obey.” Then again, since the species in the above version of the three laws is no longer dichotomized (robot vs. human), one might (hope to) assume here that the role of the obeying human agent could be interchangeable with that of the ordering human agent.

Though humans have yet to consider Deleuze’s and Guattari’s rhizomatic (DAO) approach for themselves, outside of technological networks, blockchains, and cryptocurrencies; the latter, behind the scenes of these human technologies, are imposingly hierarchical (and authoritarian, or perhaps tyrannical at times) upon, for instance, energy use, which in turn could be seen as violating Law 1 and Law 3.

Alternatively, one might refer to the present state of human labor in considering the above, and argue this could all be wishful thinking. 

If one were to add to this a similarly adapted question from Turing (which he himself dismissed): “can a human think?”

The above would stand instead of the less appropriated versions of “can a machine think?” (soft or hard) or “can a computer think?” (soft or hard). If one were to wonder “can a human think?”, then one is allowing the opening of a highly contested and uncomfortable realm of imagination. Then again, one is imposing this on any artificial form, or on any form that is segregated from the human by being narrated as “non-human” (i.e., fauna or flora; or rather, most of us limit this to “animal”).

As a human law: by assigning irrational or non-falsifiable attributes, fervently defendable as solely human, and by fervently taking away these same attributes from any other than oneself, one has allowed oneself to justify dehumanizing the other (human or other) into being inferior, or available for forced labor.

[*3]

This iterated law seems continuously broken.

If one then were to consider human generations yet to be born (contextualized by our legacies of designed worlds and their ecological consequences), one might become squeamish and prefer to hum a thought-silencing song, which could inadvertently revert one back to the iteration of Turing’s question: “can humans think?”

The human species also applies categorizing phrasings containing “overthink,” “less talking, more doing,” “too cerebral,” and so on. In the realm of the above three laws, and of this thought-exercise, these could lead to some entertaining human and robot (i.e., in harmony with its etymology, “forced laborer”) paradoxes alike:

“could a forced laborer overthink?”
“could a forced laborer ever talk more than do?”
“could a forced laborer be too cerebral?” One might now be reminded of Star Wars’ slightly neurotic C-3PO, or of a fellow (de)human.

—animasuri’22

Thought-exercise perversion #002 of the laws:

<< Asimov’s Humans #2 >>

“A human may not injure a robot or, through inaction, allow a robot to come to harm.”

“A human must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”


—animasuri’22

Thought-exercise perversion #003 of the laws:

<< Asimov’s Neo-Humans #3 >>

“A robot may not injure a robot or, through inaction, allow a robot to come to harm.”

“A robot must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”

—animasuri’22