
<< Six Fingers as Six Sigma >>

Some Reflections on Artificially-Generated Content (AGC) Based on Synchronously-occurring News Cycles and Suffering

The concept of “Six Sigma” relates to processes for identifying error and reducing defects; it is a method aimed at improving processes and their output. In this context, “Six Fingers” is an artifact found in visual Artificially-Generated Content (AGC). Identifying attributes that aid a critical view of AGC could, to some extent, allow the nurturing of a set of tools in support of well-being and of considering the right action to take, perhaps aiding the human processes of being human and becoming (even more) human(e)…
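As a hedged aside, for they who wish the metaphor quantified (the 1.5-sigma long-term shift below is standard Six Sigma convention, not a claim made elsewhere in this post): a “six sigma” process corresponds to a defect probability of roughly

\[
P(\text{defect}) \approx \Phi\big({-(6 - 1.5)}\big) = \Phi(-4.5) \approx 3.4 \times 10^{-6},
\]

that is, about 3.4 defects per million opportunities, where \(\Phi\) is the standard normal cumulative distribution function.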

Could/should I identify errors or features that mark a piece of AGC as AGC? Can we humans identify AGC with our own senses? For how much longer? Do we want to identify it? Are there contexts within which we want to identify AGC more urgently than in others; e.g. highly emotionally-loaded content that occurs in one’s present-day news cycles, or where AGC is used to (emotionally) augment, or create a sensation of being, present-day news? What are the ethical implications?

This first post tries to bring together some of my initial thoughts and some of those I observed among others. Since this is a rather novel area, errors in this post can surely be identified, and improvements on this theme, by others or by myself, could follow.

Let me constrain this post by invoking some examples of visual AGC:

A common visual attribute in the above is hands with (at least) six fingers. Six fingers, at times found in graphic Generative AI output (a subset of AGC), are an attribute that recurs in this medium and yet one that is possibly disappearing as the technology develops over time.

For now, it has been established by some as a tool to identify hyper-realistic imagery: imagery generated of an imaginable event that statistically could occur in the tangible realm of human events and experiences. This is so even while fantastical settings, too, can as easily be generated with six or more fingers.

And then there are artificial renditions of events shown in the news cycles, renditions generated almost in synchronization with the events themselves. I am prompted by the above visuals, which are a few such artificial renditions. Some of these creations touch on the relations and lives of actual, physical people. This latter category of artificial renditions is especially sensitive, since it is intertwined with the understandable emotional and compassionate weight we humans attach to some events as they occur.

For now, the six fingers, among a few other and at times difficult-to-identify attributes, allow heuristic methods of identification. Such a process of judgement inherently lacks the features of more constrained and defined scientific techniques, or mathematical accuracy. In effect, hyper-realism is itself one of those categories for identification: some realistic renditions are not just realistic, they are hyper-realistic. For instance, some smudges of dirt on someone’s face might just seem a bit uncanny in their (at times gruesome) graphic perfection. Moreover, by comparing generated images of this sort, one can start to see similarities in these “perfections.”

This admittedly intuitive identification of attributes can, as a set of tools, enable one to distinguish generated visuals from the visuals found in, say, (digital) photographs taken of factual, tangible humans and their surrounding attributes. Digital photos (and, at times intuitively more so, analog photos) found their beginnings in capturing a single event occurring in the non-digital realm. In a sense, digital photos are the output of a digitization process. AI-generated realistic imagery does not simply capture the singular (or does not do so in the ever so slightly more direct sense of data collected by means of digital photography).

Simultaneously, we continue to want to realize that (analog or digital) photography too can suffer from error and manipulation. Moreover, it too is very open to interpretation (e.g., via angle, focus, digital retouching, and other techniques). Note, error and interpretation are different processes. So too are transparency and tricking consumers of content different processes. In the human process of (wanting to) believe a story, the creator of stories might best know that consumers of content are not always sharply focused, nor always on the lookout for little nuances that could give away diminished holistic views of the depicted and constructed reality or+and realities. Multi-billion dollar industries and entire nations are built on this very human and neurological set of facts: what we believe to see, or otherwise believe to sense, is not necessarily what is there. Some might argue this is a desirable feature. This opens up many avenues for discussion, some of which are centuries old and lead us into metaphysics, ontology, reality and+or realities.

Returning to digits as fingers: in generated imagery the fingers are one attribute that, for now, can help burst the bubble (if such a bubble needs bursting). The anatomy of the hand (and other attributes), e.g., the position and length of the thumb as well as, when zoomed in, the texture of the skin, can aid in creating doubt toward the authenticity of a “photo.” The type of pose and the closed eyes, which also recur in other similar generated visuals, can aid in critically questioning a visual. The overall color theme and overall texture are a third, less obvious, set of attributes. Additional text and symbols (e.g., the spelling, composition or texture of a symbol’s accompanying text, their position, the lack of certain symbols, or the choice of symbols given their suggestive combination with other symbols, versus the absence or probability of such combinations) could, just perhaps and only heuristically, create desirable doubt as well.
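To make the heuristic, non-mathematical character of such judgement concrete, here is a minimal sketch (in Python; the attribute names and weights are entirely hypothetical illustrations of mine, not established detection criteria) of how a human viewer’s checklist might be tallied:

```python
# A toy tally of visual attributes that may cast doubt on a "photo."
# Attributes and weights are illustrative only; none are validated detectors.

observations = {
    "extra_or_malformed_fingers": True,    # e.g., six fingers on one hand
    "uncanny_skin_or_dirt_texture": True,  # "perfect" smudges, waxy skin
    "recurring_pose_or_closed_eyes": False,
    "odd_overall_color_theme": False,
    "garbled_text_or_symbols": True,       # misspelled or implausible symbols
}

weights = {
    "extra_or_malformed_fingers": 3,       # anatomy cues treated as stronger hints
    "uncanny_skin_or_dirt_texture": 2,
    "recurring_pose_or_closed_eyes": 2,
    "odd_overall_color_theme": 1,
    "garbled_text_or_symbols": 2,
}

# Sum the weights of the attributes actually observed.
doubt_score = sum(w for k, w in weights.items() if observations[k])
print(f"doubt score: {doubt_score} / {sum(weights.values())}")
```

Such a tally yields a score, not a verdict: it merely formalizes doubt, and, as noted above, the attributes themselves are likely to disappear as the technology develops.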

We might want to establish updated categorizations (if these do not already exist somewhere) that could aid they who wish to learn to see and to distinguish types of AGC, or types of content sources, with a specific focus on Generative AI. At the same time, it is good to remember that this is difficult due to the iterative nature of that which is to be categorized: technology, and thus its output, upgrades often and adapts quickly.

Nevertheless, identifying tricks that increase the possibility of identification by humans, in human and heuristic ways, could be a worthy intent. For instance, some might want to be enabled to ask and (temporarily) answer: is it, or is it not, created via Generative AI? Or, as has occurred in the history of newly-introduced modalities of content-generating media, e.g. the first films: is it, or is it not, a film of a train, or rather an actual train, approaching me? Is it, or is it not, an actual person helping someone in this photo? Or+and: is this a trigger feeding on my emotions, which can aid me to take (more or less) constructive action? And do I, at all, want to care about these distinctions (or is something else more important and urgent to me)?

As with other areas of human experience (e.g. the meat industry versus vegetable-based “meats” versus cell-based, lab-generated meats): some people like to know whether the source of the juiciness of the beef steak they are biting into comes from an animal, does not come from an animal, or comes from a cell-reproducing and repurposing lab (side-note: I do not often eat meat(-like) products, if any at all). A particular 1999 science fiction film, The Matrix, plays with this topic of juicy, stimulating content as well. This in turn could bring us to wonder about ontological questions of the artificial, the simulation, and simulacra.

Marketing, advertising, PR, rhetoric and narration tools, and intents, can aid in making this more, or far less, transparent. Sauce this over with variations in types of ethics and one can imagine a sensitive process in making, using and evaluating an artificially generated hyper-real image.

Indeed, while the generated sentiment in such visuals could be sensed as important and as real (by they who sense it and they who share it), we might still want to tread with care. If such artificial generation is on a topic that is current, for instance a natural disaster, then the clarity of that disaster as being very tangibly real makes it particularly unsettling for us humans. And yet natural disasters affecting lives and communities are (most often) not artificially generated (while some might be, and are, human-instigated). The use of artificial attributes toward that which is very tangible might, to some, increase distrust, desensitization, apathy or a sense of dehumanization.

Then there is the following question: why should I use an artificially generated image instead of one that is of actual people and (positive) actors, actually aiding in an actual event? It is a fair question to ponder, so as to unveil possible layers of artificial narrative design implied in the making of a visual, or other, modality. So, then, what of the actual firefighter who actually rescued an actual child? Where is her representation to the world, and in remembrance or commemoration?

Granted, an artificial image could touch on issues or needs relatable to a present-day event in very focused and controlled manners, such as the call for cooperation. It can guide the stimulation of emotion-to-positive-action. It is also fair to assume that the sentiment found with such a visual can remind us that we need and want to work together, now and in the future, and to see actual humans relating with other humans while getting the urgent and important work done, no matter one’s narrated and believed differences generated through cultural constructs (digital, analog, imagined or otherwise imbued).

Simultaneously, we could also try to keep it real and humble, and still actionable. It is possible to tell the story of these physical, tangible and relational acts without artificially diminishing them, and simultaneously we can commemorate and trigger by means of artificially generated representations of what is happening as we speak. A question might then be: is my aim with AGC to augment, confuse, distract, shift attention, shift the weight, change the narrative, commemorate, etc.?

Symbols are strong. Take, for instance, that of a “firefighter” holding a rescued child with the care a loving mother, and father, would unconditionally offer. Ideally, we each are firefighters with a care-of-duty. These images can aid across ideologies, unless, and this is only hypothetical of course, such imagery were used, via its additionally placed symbols, to reinforce ideological tension or other ideological attributes while misusing or abusing human suffering. Is such use then a form of misinformation or even disinformation? While an answer here is not yet clear to me, the question is a fair one to ask, intertwined with a duty-of-care.

Hence, openness of, and transparency toward, attribution of the one (e.g., we can state it is a “generated image” more explicitly than “image” versus “photo”) does not have to diminish the integrity of the other (e.g., shared human compassion via shared emotion and narration), nor of on-the-ground, physical action by actual human beings amidst actual disaster-stricken communities, or within other events that need aid. How can I decrease the risk that the AGC diminishes, for (some of) its consumers, that which the AGC is aimed at?

The manner of using Artificially Generated Content (AGC) is still extremely new and very complex. We collectively need time to figure out its preferred uses (if we want to use it at all). In this, too, we can cross “borders” and help each other in our very human processes of trial and error.

This can be offered by balancing ethos (ethics, duty-of-care, etc.), pathos (emotion, passion, compassion, etc.) and logos (logic, reason, etc.) and now, perhaps more than ever, also techne (e.g., Generative AI and its AGC). One can include the other in nuanced design, sensibility, persistence, duty of care, recognition, and action, even and especially in moments of terrible events.

Expanding on this topic of artificially generated narration with positive and engaging aims, I for one wouldn’t mind seeing diversity in gender roles and (equity via) other diversities as well in some of these generated visuals of present-day events.

Returning to the artificial: if it must be, then diversity in poses, skin colors and textures as well would be more than only “nice.” And yet, someone might wonder, all this fine-tuning and nuancing might perhaps decrease the ability to distinguish the digitally-generated (e.g. via data sets, a Generative AI system and prompting) from the digitized and digitally captured (e.g. a digital photo). The previous holds if the data set is not socially biased. Herein too, technology and its outputs are not neutral.

If the aim with a generated visual (and other modalities of AGC) of a present-day, urgent, important and sensitive event is to stimulate aid, support, compassion, constructive relations, positive acts and inclusiveness (across ideology, nation and human diversities), then we could do so (while attributing it clearly). We can then also note that this holds even if one thinks one does not need to, and thinks one is free to show only generated attributes derived from traditional, European, strong-male narratives. Or+and, we could do so much more. We could, while the one does not need to exclude the other, be so much more nuanced, more inclusive, and increase the integrity of our calls-to-action. Generated imagery makes that possible too; if its data set is designed to allow it.

reference

https://france3-regions.francetvinfo.fr/bretagne/ille-et-vilaine/rennes/intelligence-artificielle-ses-photos-faites-par-ia-font-le-tour-du-monde-2711210.html

note

it was fairly and wittily pointed out (on LinkedIn) that “Six Fingers” in this context is not to be confused with a critique of the human imagination via fairy tales (e.g., from The Princess Bride: “–Inigo Montoya: I do not mean to pry, but you don’t by any chance happen to have six fingers on your right hand? –Man in Black: Do you always begin conversations this way?”), nor as a denial or acceptance of human diversity such as is classified as (human) polydactyly.




<< Enlightened Techno Dark Ages >>


brooks and meadows,
books and measurements
where the editor became the troll

it was there around the camp fire or under that tree at the tollgate gasping travelers scrambling a coin grasping a writer’s breath for a review piercing with needle daggers of cloaked anonymity

are the wolves circling at the edge of the forest
as overtones to the grass crisp dew in morning of a fresh authentic thought

is the troll appearing from beneath the bridge expected and yet not and yet there it is truthful in its grandness grotesqueness loudness

the troll phishing gaslighting ghosting and not canceling until the words have been boned and the carcass is feasted upon

spat out you shall be wordly traveler blasted with conjured phrases of bile as magically as dark magic may shimmer shiny composition

the ephemeral creature wants not truth it wants riddle and confuse not halting not passing no period no comma nor a dash of interjection connection nor humane reflection

at the bridge truth is priced as the mud on worn down feet recycled hashed and sprinkled in authoritative tone you shall not pass

confusing adventure protector gatekeeper with stony skin clubs and confabulating foam Clutch Helplessly And Tremblingly Grab Pulped Truths from thereon end real nor reason has not thy home: as it ever was, so it shall be.

A bird sings its brisk tune.

—animasuri’23

Perverted note taking:

Peter A. Fischer, Christin Severin (15.01.2023, 06.30 Uhr). WEF-Präsident Børge Brende zu Verschwörungsvorwürfen: «Wir werden die Welt definitiv nicht regieren» [WEF President Børge Brende on conspiracy accusations: “We will definitely not rule the world”]. Retrieved 16 January 2023 from https://www.nzz.ch/wirtschaft/wef-praesident-borge-brende-wir-werden-die-welt-definitiv-nicht-regieren-ld.1721081 (with a thank you to Dr. WSA)

<< I Don't Understand >>

“What is a lingo-futurist?” you ask?

It is a fictional expert who makes predictions
about the pragmatics and shifts in social connotations of a word.

Here is one such prediction by a foremost lingo-futurist:

“2023 will be the year where ‘understand’ will be one of the most contested words.

No longer will ‘understand’ be understood with understanding as once one understood.

Moreover, ‘I don’t understand’ will increasingly —for humans— mean ‘I disapprove’ or, for non-human human artifacts, ‘the necessary data was absent from my training data.’

‘Understand’, as wine during recession, will become watered-down making not wine out of water yet, water out of wine, while hyping the former as the latter.

All is well, all is fine wine, you understand?”

—animasuri’23

<< Creating Malware: Technology as Alchemy? >>

Engineering —in a naive, idealized sense— is different from science in that it creates (in)tangible artifacts, as imposed & new realities, while answering a need

It does so by claiming a solution to a (perceived) problem that was expressed by some (hopefully socially-supportive) stakeholders. Ideally (& naively), the stakeholders equal all (life), if not a large section, of humanity

Whose need does ChatGPT answer when it aids in creating malware?

Yes, historically the stakeholders of engineering projects were less concerned with social welfare or well-being. At times (too often), an engineered deliverable created (more) problems, besides offering the intended, actual or claimed solution.

What does ChatGPT solve?

Does it create a “solution” for a problem that is not urgent, not important and not requested? Does its “solution” outweigh its (risky / dangerous) issues sufficiently for it to be let loose into the wild?

Idealized scientific methodology –that is, through a post-positivist lens– claims that scientific experiments can be falsified (by third parties). Is this to any extent enabled in the realm of Machine Learning and LLMs, without some of its creators being seen blaming shortcomings on those who engage in falsification (i.e., trying to proverbially “break” the system)? Should such testing not have been engaged in (in dialog with critical third parties) prior to releasing the artifact into the world?

Idealized (positivist) scientific methodology claims to unveil Reality (Yes, that capitalized R-reality that has been and continues to be vehemently debated and that continues to evade capture). The debates aside, do ChatGPT, or LLMs in general, create more gateways to falsity or tools towards falsehood, rather than toward this idealized scientific aim? Is this science, engineering or rather a renaissance of alchemy?

Falsity is not to be confused with (post-positivist) falsification, nor with offering interpretations, the latter of which Diffusion Models (i.e., text-to-image) might be argued to be offering (note: this too is and must remain debatable and debated). However, visualization AI technology did open up yet other serious concerns, such as in the realm of attribution, (data) alienation and property. Does ChatGPT offer applicable synthesis, enriching interpretation, or rather negative fabrication?

Scientific experiment is preferably conducted in controlled environments (e.g., a lab) before its engineered deliverables are let out into the world. Those managing ChatGPT or recent LLMs do not seem to function within the walls of this constructed and contained realm. How come?

Business, state incentives, rat races, and financial investments motivate and do influence science and surely engineering. Though is the “democratization” of output from the field of AI then with “demos” in mind, or rather yet again with ulterior demons in mind?

Is it then too farfetched to wonder whether the (ideological) attitudes surrounding, and the (market-driven) release of, such constructs are as if of a ware with hints, undertones, or overtones of maliciousness? If not too outlandish an analogy, it might be a good idea not to look, in isolation, at the example of a technology alone.

<< Fair Technologies & Gatekeepers at the Fair >>

As summarized by Patton (1997), Sirotnik (1990) pointed at the importance of “equality, fairness and concern for the common welfare.” This holds on the side of processes of evaluation, and on that of the implementation of interventions (in education), through participation by those who will be most affected by that which is being evaluated. These authors, among many others, offer various insights into practical methods and forms of evaluation; some more, some less participatory in nature.

With this in mind, let us now divert our gaze to the stage of “AI”-labeled research, application, implementation and hype. Let us then look deeper into its evaluation (via expressing ethical concern or critique).

“AI” experts, evaluators and social media voices warn and call for fair “AI” application (in society at large, and thus also into education).

These calls occur while some are claiming ethical concerns related to fairness. Others are dismissing these concerns, in combination with discounting the very people who voice them. For an outsider looking in on the public polemics, it might seem “cut-throat.” And yet, if we are truly concerned about ethical implications, and about answering the needs of peoples, this violent image, and the (de)relational acts it collects in its imagery, have no place. Debate, evaluate, dialog: yes. Debase, depreciate, monologue: no. Yes, it does not go unnoticed that a post such as this one initially runs as a monologue, only quietly inviting dialog.

So long as experts, and perhaps any participant alike, are their own pawns, they are also violently placed according to their loyalties to very narrow tech formats. Submit to the theology of the day and thou shalt be received. Voices are tested, probed, inducted, stripped or denied. Evaluation as such is violent and questionably serves that “common welfare.” (Ibid) Individuals are silenced or over-amplified, while others are simply not even considered worthy to be heard, let alone to be understood. These processes too have been assigned to “AI”-relatable technologies (i.e., algorithmic designs and how messages are boosted or muted). So goes the human endeavor. So goes the larger human need, displaced by the forces of market gain and the loudest noises on the information highways which we cynically label as “social.”

These polemics occur while, in almost the same breath, the conversation is kept within the bubble of the same expert voices: the engineer, the scientist, the occasional policymaker, the business leader (at times narrated as if in their own echo-chambers). The experts are “obviously” “multidisciplinary.” That is, many seem to come, tautologically, from within the fields of Engineering, Computer Science, Cognitive Science, Cybernetics, Data Science, Mathematics, Futurology, (some) Linguistics, Philosophy of Technology, or variations, sub-fields and combinations thereof. And yet, even there, the rational goes off the rails, and theologies and witches become but distastefully mislabeled dynamics, pitchforking human relations.

Some of the actors in these theatrical stagings have no industrial backing, while other characters are perceived as comfortably leaning against the strongholds of investors, financial and geopolitical forces, or mega-sized corporations. This is known. Though it is taboo (and perhaps defeatist) to bundle it all as a (social) “reality.” Ethical considerations of these conditions are frequently hushed, or more easily burned at the stake. Mind you, across human history, this witch never was a witch, yet was evaluated to better be known as a witch; or else…

Similar to how some perceive the financially rich –as taking stages to claim insight into any aspect of human life, needs, urgency, and decisions on filters of what is important– so too could gatekeepers in the field of “AI” and its peripheries (Symbolic, ML or Neuro-symbolic) be perceived to be deciding for the global masses what is needed for these masses, and what is to be these masses’ contribution to guiding the benefit of “AI” applications for all. It is fair and understandable that such considerations start there where they with insight wander. It is also fair to state that the “masses” cannot really be asked. And yet, who then? When then? How then? Considering the proclaimed impacts of output from the field of “AI,” is it desirable that the thinking and acting stay there where the bickering industry and academic experts roam?

Let us give this last question some context:

For instance, in too broad and yet more concrete strokes: hardly anyone from the pre-18-year-old generations is asked, let alone prepared, to critically participate in the supposedly-transformational thinking and deployment of “AI”-related, or hyped, output. This might be because these young humans are hardly offered the tools to gain insight beyond the skill of building some “AI”-relatable deliverable. The techno-focused techniques are heralded, again understandably (yet questionably as universally agreeable), as a must-have before entering the job market. Again, to be fair, sprinkled approaches to critical and ethical thinking are presented to these youngsters (in some schools). Perhaps a curriculum or two, some designed at MIT, might come to mind. Nevertheless, seemingly, some of these attributes are offered only as mere echoes within techno-centrist curricula. Is this an offering that risks flavoring learning with ethics-washing? Is (re)considering where the voices of reflection come from, are nurtured and are located, a Luddite’s stance? As a witch was, it too could easily be labeled as such.

AI applications are narrated as transforming, ubiquitous, methodological, universal, enriching and engulfing. Does ethical and critical skill-building similarly meander through the magical land of formalized learning? Ethical and critical processes (including computational thinking beyond “coding” alone), of thought and act, seem shunned and feared to become ubiquitous, methodological, enriching, engulfing or universal (even if diverse, relative, depending on context, or depending on multiple cultural lenses). Is this a systemic pattern, as an undercurrent in human societies? At times it seems that they who court such metaphorical creek are shunned as the village fool or its outskirt’s witch.

Besides youth, color and gender see their articulations staged while filtered through the few voices that represent them from within the above-mentioned fields. Are then more than some nuances silenced, yet scraped for handpicked data, and left to bite the dust?

Finally, looking at the conditions in further abstract (with practical and relational consequences): humans are mechanomorphized (i.e., seen as machines of not much more complexity than human-made machinery, and dismissed, as an old computer is, if no longer befitting the mold or the preferred model). Simultaneously, and pulling in the opposite direction, the artificial machines are being anthropomorphized (i.e., made to act, look and feel as if of human flesh and psyche). Each of their applied, capitalized techniques is glorified. Some machines are promoted (through lenses of fear-mongering, one might add) as better at your human job, or as better than an increasing number of features you might perceive as part of becoming human(e).

Looking at the above, broadly brushed, scenarios:

can we speak of fair technologies (e.g. fair “AI” applications) if the gatekeepers to fairness are mainly those who create, or they who sanction the creation of, the machinery? Can we speak of fair technologies if they who do create them have no constructive space to critique, to evaluate via various vectors, and to construct creatively through such feedback loops of thought-to-action? Can we speak of fair technology, or fair “AI” applications, if they who are influenced by its machinery, now and into the future, have few tools to question and evaluate? Should fairness, and its tools for evaluation, be kept aside for evaluation by the initiated alone?

While a human is not the center of the universe (anymore / for an increasing number of us), the carefully nurtured tools to think, participatorily evaluate and (temporarily) place the implied transformations, are sorely missing, or remain in the realm of the mysterious, the magical, witches, mesmerizations, hero-worship or feared monstrosities.

References:

Patton, M.Q. (1997). Intended Process Uses: Impacts of Evaluation, Thinking and Experiences. In: Patton, M.Q., Utilization-Focused Evaluation: The New Century Text (Ch. 5, pp. 87–113). London: Sage.

Sirotnik, Kenneth A. (Ed.). (1990). Evaluation and Social Justice: Issues in Public Education. New Directions for Program Evaluation, No. 45. San Francisco: Jossey-Bass.

<< Transition By Equation >>

focus pair: Mechanomorphism | Anthropomorphism

One could engage in the following over-simplifying, dichotomizing and outrageous exercise:

if we were to imagine that our species succeeded in collectively transforming humanity, that is, succeeded in transforming how the species perceives its own ontological being into one of:

“…we are best defined and relatable through mechanomorphic metaphors, mechanomorphic self-images, mechanomorphic relations and datafying processes,”

At that imaginary point, any anthropomorphism (as an engine for designs or visionary aims) within technologies (and that with unique attention to those associated with the field of “AI”) might be imagined to be(come) empowered, enabled or “easier” to accomplish with mechanomorphized “humans.”

In such imagination, the mechanomorphized human, with its flesh turned powerless and stale, and its frantic fear of frailty, surrenders.

It could be imagined being “easy,” & this since the technology (designer) would “simply” have to mimic the (human as) technology itself: machine copies machine to become machine.

Luckily this is an absurd imagination as much as Guernica is forgettable as “merely” cubistic surrealism.

<< Not Condemning the Humane into a Bin of Impracticality >>


There’s a tendency to reassign shared human endeavors into a corner of impracticality, via labels of theory or thing-without-action-nor-teeth: Philosophy (of science & ethics), art(ists), (fore)play, fiction, IPR, consent & anything in between the measurability of 2 handpicked numbers. Action 1: Imagine a world without these. Action 2: Imagine a world only with these.

Some will state that if it can’t be measured it doesn’t exist. If it doesn’t exist in terms of being confined as a quantitative pool (e.g. a data set), it can be ignored. Ignoring can be tooled in a number of ways: devalue, or grab to revalue through one’s own lens on marketability.

(re-)digitization, re-categorization, re-patterning of the debased, to create a set for remodeled reality, equals a process that is of “use” in anthropomorphization, and mechanomorphization: a human being is valued as datasets of “its” output, e.g., a mapping of behavior, results of an (artistic or other multimodal) expression, a KPI, a score.

While technology isn’t neutral, the above is neither singularly a technological issue. It is an ideologically systematized issue with complexity and multiple vectors at play (i.e. see above: that which seems of immediate practicality, or that which is of obvious value, is not dismissed).

While the scientific methods & engineering methods shouldn’t be dismissed nor confused, the humans in their loops aren’t always perceiving themselves as engines outputting discrete measurables. Mechanomorphism takes away the “not always” & replaces it with a polarized use-versus-waste

Could it be that mechanomorphism, reasonably coupled with anthropomorphism, is far more a concern than its coupled partner, which itself is a serious process that should also allow thought, reflection, debate, struggle, negotiation, nuance, duty-of-care, discernment & compassion?

epilogue:

…one could engage in the following over-simplifying, dichotomizing and outrageous exercise: if we were to imagine that our species succeeded in collectively transforming humanity (as how the species perceives its own ontological being) to be one of “we are best defined and relatable through mechanomorphic metaphors, relations and datafying processes,” then any anthropomorphism within technologies (with a unique attention to those associated with the field of “AI”) might be imagined to be(come) easier to be accomplished, since it would simply have to mimic itself: machine copies machine to become machine. Luckily this is absurd as much as Guernica is cubistically surreal.

Packaging the above, one might then reread Robert S. Lynd’s words penned in 1939: “…the responsibility is to keep everlastingly challenging the present with the question: But what is it that we human beings want, and what things would have to be done, in what ways and in what sequence, in order to change the present so as to achieve it?”

(thank you to Dr. WSA for triggering this further imagination)

Lynd, R. S. (1939). Knowledge for What? Princeton: Princeton University Press.

<< One Click To Climbing A Techno Mountain >>


A Rabbi once asked: “Is it the helicopter to the top that satisfies?”

At times, artistic expression is as climbing. It is the journey that matters, the actual experience of the diffusion of sweat, despair, and to be taken by the clawing hand of an absent idea about to appear through our extremities into an amalgamation of tool- and destination-media.

The genius lies in the survival of that journey, no, in the rebirth through that unstable, maddening journey and that incisive or unstopping blunt critique of life.

That’s clogs of kitsch as blisters on one’s ego, sifted away by the possible nascence of art, the empty page from the vastness of potential, the noise pressed into a meaning-making form as function.

Artistry: to be spread out along paths, not paved by others. And if delegated to a giant’s shoulder, a backpack or a mule: they are companions, not enslaved shortcuts.

That’s where the calculated haphazardness unveiled the beauty slipping away from the dismissive observer, either through awe or disgust alike, ever waiting for you at your Godot-like top, poking at you

—animasuri’22

<< data in, fear & euphoria out >>


A recent New Scientist article stub [5] claims “More than one-third of artificial intelligence researchers around the world agree…”

Following, in this article’s teaser (the remainder sits safely and comfortably behind a paywall), “more than one third” seems equated with a sample of 327 individuals in a 2022 global population estimated at 7.98 billion [2, 8] (…is that about 0.000004% of the population?)

This would deductively imply that there are fewer than 981 AI researchers in a population of 7.98 billion. …is then 0.0000124% of the population deciding for the 100% as to what is urgent and important to delegate “intelligence” to? …surely (not)… ( …demos minus kratos equals…, anyone?)
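For transparency, a minimal sketch (Python) reproducing this back-of-the-envelope arithmetic; the population figure is the 2022 estimate cited in [2, 8], and the factor of three simply inverts “more than one-third”:

```python
# Reproducing the post's back-of-the-envelope percentages.
population = 7.98e9   # estimated 2022 world population [2, 8]
sample = 327          # respondents reported in the article teaser [5]

# Share of the world population represented by the sample itself.
print(f"{sample / population:.7%}")   # ~0.0000041%

# If 327 were "more than one-third" of all AI researchers,
# the total would be fewer than 3 * 327 = 981.
upper_bound = 3 * sample
# ~0.0000123% with 7.98e9 (about 0.0000124% with a 7.9e9 estimate)
print(f"{upper_bound / population:.7%}")
```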

Five years ago, in 2017, The Verge referenced reports estimating the number of individuals working in the field at 10’000, while others suggested an estimate closer to 300’000 [9] (…diffusioningly deviating).

As an opposing voice to what the 327 individuals are claimed to suggest, there is the 2022 AI Impacts poll [4], which suggests a rather different finding

Perhaps the definitions are off, or the estimations are?

When expressing ideas driven by fear, or that are to be feared, one might want to tread carefully. Fear as much as hype & tunnel-visioned euphoria, while at times of (strategic, rhetorical, or investment pitching) “use”, are proverbial aphrodisiacs of populist narratives [1, 3, 6, 7]

Such could harm the ability to identify & improve on the issue, or related issues, which might indeed be “real”, urgent & important

This is not “purely” a science, technology, engineering or mathematics issue. It is more than that; while, for instance, through the lens created by Karl Popper, it is also a scientific-methodological issue.

—-•
References:

[1] Chevigny, P. (2003). The populism of fear: Politics of crime in the Americas. Punishment & Society, 5(1), 77–96. https://doi.org/10.1177/1462474503005001293

[2] Current World Population estimation ticker: https://www.worldometers.info/world-population/

[3] Friedrichs, J. (n.d.). Fear-anger cycles: Governmental and populist politics of emotion. (Blog). University of Oxford. Oxford Department of International Development. https://www.qeh.ox.ac.uk/content/fear-anger-cycles-governmental-and-populist-politics-emotion

[4] Grace, K., Korzekwa, R., Mills, J., Rintjema, J. (2022, Aug). 2022 Expert Survey on Progress in AI. Online: AI Impacts. Last retrieved 25 August 2022 from https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Extinction_from_AI 

[5] Hsu, J. (2022, Sep). A third of scientists working on AI say it could cause global disaster. Online: New Scientist (Paywall). Last retrieved 24 Sep 2022 from https://www.newscientist.com/article/2338644-a-third-of-scientists-working-on-ai-say-it-could-cause-global-disaster/

[6] Lukacs, J. (2006). Democracy and Populism: Fear and Hatred. Yale University Press. 

[7] Metz, R. (2021, May). Between Moral Panic and Moral Euphoria: Populist Leaders, Their Opponents and Followers. (Event / presentation). Online: The European Consortium for Political Research (ecpr.eu). Last retrieved on 25 September 2022 from https://ecpr.eu/Events/Event/PaperDetails/57114

[8] Ritchie, H., Mathieu, E., Rodés-Guirao, L., Gerber, M. (2022, Jul). Five key findings from the 2022 UN Population Prospects. Online: Our World in Data. Last retrieved on 20 September 2022 from https://ourworldindata.org/world-population-update-2022

[9] Vincent, J. (2017, Dec). Tencent says there are only 300,000 AI engineers worldwide, but millions are needed. Online: The Verge. Last retrieved 25 Sep 2022 from https://www.theverge.com/2017/12/5/16737224/global-ai-talent-shortfall-tencent-report

—-•

<< Philo-AI AI-Philo >>

The idea of Philosophy is far from new or alien to the field of AI. In effect, a 1969 paper was already proposing “Why Artificial Intelligence Needs Philosophy”

“…it is important for the research worker in artificial intelligence to consider what the philosophers have had to say…” 

…have to say; will have to say

“…we must undertake to construct a rather comprehensive philosophical system, contrary to the present tendency to study problems separately and not try to put the results together…” 

…besides the observation that the “present tendency” is one that has been present since at least 1969, this quote might more hope-inducingly be implying the need for integration & transdisciplinarity

This 1969 paper, calling for philosophy, was brought to us by a founder of the field of Artificial Intelligence. Yes: the human who coined the field & its name did not shy away from transdisciplinarity

This is fundamentally important enough to be kept active in the academic & popular debates

Note, philosophy contains axiology, which contains aesthetics & ethics. These are afterthoughts in the present-day narrations that make up some parts of the field of “AI”

Some claim it is not practical. However, note: others claim mathematics too is impractical. Some go as far with the dismissal as to state that people studying math (which is different from Mathematics) end up with Excel

These debasing narratives, which are also systematized into our daily modes of operation & relation, are dehumanizing

Such downward narration is not rational, & is tinkering with nuances which are not contained by any model to date

Let us further contextualize this

Machine-acts are at times upwardly narrated & hyped as humanized (i.e., anthropomorphism). Simultaneously, human acts are (at times downwardly) mechanized (i.e., mechanomorphism)

These opposing vectors are let loose into the wild of storytelling while answering at times rather opaque needs, & offering unclear outcomes for technologies, packaged with ideological hopes & marketable solutions. The stories are many. The stories are highly sponsored & iterative. The stories are powered by national, financing & corporate interests. ok. & yet, via strategic weaponization of storytelling, they divide & become divisive. ok; not all. Yet not all whitewash those who do not

In these exciting & mesmerizing orations, who & what is powering the enriching philosophical narratives in a methodological manner for the young, old, the initiated, the outlier or the ignorant? 

Here, in resonance with McCarthy, philosophy (axiology) comes in as practically as mathematics. Both imply the beauty & complexity of a balancing opportunity which does not debase technological creativity. This transdisciplinarity enables humanity.

Nevertheless, Bertrand Russell probably answered, over and over again, the question as to why Axiology is paid lip service yet kept at bay: “Men fear thought as they fear nothing else on Earth” (1916)


Reference

McCarthy, J., Hayes, P.J. (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer and D. Michie. (eds). Machine Intelligence 4, 463–502. Edinburgh University Press
http://jmc.stanford.edu/articles/mcchay69/mcchay69.pdf

OR

McCarthy, J., & Hayes, P. J. (1981). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Readings in Artificial Intelligence (pp. 431–450). Elsevier. https://doi.org/10.1016/B978-0-934613-03-3.50033-7