Category Archives: The field of AI

<< Enlightened Techno Dark Ages >>


brooks and meadows,
books and measurements
where the editor became the troll

it was there around the camp fire or under that tree at the tollgate gasping travelers scrambling a coin grasping a writer’s breath for a review piercing with needle daggers of cloaked anonymity

are the wolves circling at the edge of the forest
as overtones to the grass crisp dew in morning of a fresh authentic thought

is the troll appearing from beneath the bridge expected and yet not and yet there it is truthful in its grandness grotesqueness loudness

the troll phishing gaslighting ghosting and not canceling until the words have been boned and the carcass is feasted upon

spat out you shall be wordly traveler blasted with conjured phrases of bile as magically as dark magic may shimmer shiny composition

the ephemeral creature wants not truth it wants riddle and confuse not halting not passing no period no comma nor a dash of interjection connection nor humane reflection

at the bridge truth is priced as the mud on worn down feet recycled hashed and sprinkled in authoritative tone you shall not pass

confusing adventure protector gatekeeper with stony skin clubs and confabulating foam Clutch Helplessly And Tremblingly Grab Pulped Truths from thereon end real nor reason has not thy home: as it ever was, so it shall be.

A bird sings its brisk tune.

—animasuri’23

Perverted note taking:

Fischer, P. A., & Severin, C. (15 January 2023, 06:30). WEF-Präsident Børge Brende zu Verschwörungsvorwürfen: «Wir werden die Welt definitiv nicht regieren» [WEF president Børge Brende on conspiracy allegations: “We will definitely not rule the world”]. Retrieved 16 January 2023 from https://www.nzz.ch/wirtschaft/wef-praesident-borge-brende-wir-werden-die-welt-definitiv-nicht-regieren-ld.1721081 (with a thank you to Dr. WSA)

<< I Don't Understand >>

“What is a lingo-futurist?” you ask?

It is a fictional expert who makes predictions
about the pragmatics and shifts in social connotations of a word.

Here is one such prediction by a foremost lingo-futurist:

“2023 will be the year in which ‘understand’ will be one of the most contested words.

No longer will ‘understand’ be understood with understanding as once one understood.

Moreover, ‘I don’t understand’ will increasingly —for humans— mean ‘I disapprove’ or, for non-human human artifacts, ‘the necessary data was absent from my training data.’

‘Understand’, as wine during recession, will become watered down, making not wine out of water but, rather, water out of wine, while hyping the latter as the former.

All is well, all is fine wine, you understand?”

—animasuri’23

<< Value Misalignment >>

The “neutrality” of a technology has been triggering thought for some time now. I thought this 1868 visual by the artist Vinzenz Katzler was a delightful one to add the text balloon to. —animasuri’22


“AI systems that are incredibly good at achieving something other than what we really want … AI, economics, statistics, operations research, control theory all assume utility to be exogenously specified.” — Stuart Russell, 2017

<< Creating Malware: Technology as Alchemy? >>

Engineering —in a naive, idealized sense— is different from science in that it creates (in)tangible artifacts, as imposed & new realities, while answering a need.

It does so by claiming a solution to a (perceived) problem that was expressed by some (hopefully socially supportive) stakeholders. Ideally (& naively), the stakeholders equal all (life), if not a large section, of humanity.

Whose need does ChatGPT answer when it aids in creating malware?

Yes, historically the stakeholders of engineering projects were less concerned with social welfare or well-being. At times (too often), an engineered deliverable created (more) problems, besides offering the intended, actual or claimed solution.

What does ChatGPT solve?

Does it create a “solution” for a problem that is not an urgency, not important and not requested? Does its “solution” outweigh its (risky / dangerous) issues sufficiently for it to be let loose into the wild?

Idealized scientific methodology –that is, through a post-positivist lens– claims that scientific experiments can be falsified (by third parties). Is this to any extent enabled in the realm of Machine Learning and LLMs, without some of their creators being seen blaming shortcomings on those who engage in falsification (i.e., trying to proverbially “break” the system)? Should such testing not have been engaged in (in dialog with critical third parties) prior to releasing the artifact into the world?

Idealized (positivist) scientific methodology claims to unveil Reality (Yes, that capitalized R-reality that has been and continues to be vehemently debated and that continues to evade capture). The debates aside, do ChatGPT, or LLMs in general, create more gateways to falsity or tools towards falsehood, rather than toward this idealized scientific aim? Is this science, engineering or rather a renaissance of alchemy?

Falsity is not to be confused with (post-positivist) falsification, nor with offering interpretations, the latter of which Diffusion Models (e.g., text2pic) might be argued to offer (note: this too is, and must remain, debatable and debated). However, visualization AI technology did open up yet other serious concerns, such as in the realms of attribution, (data) alienation and property. Does ChatGPT offer applicable synthesis, enriching interpretation or, rather, negative fabrication?

Scientific experimentation is preferably conducted in controlled environments (e.g., a lab) before its engineered deliverables are let out into the world. Realtors managing ChatGPT or recent LLMs do not seem to function within the walls of this constructed and contained realm. How come?

Business, state incentives, rat races and financial investments motivate and do influence science, and surely engineering. Though, is the “democratization” of output from the field of AI then with “demos” in mind, or rather, yet again, with ulterior demons in mind?

Is it then too far-fetched to wonder whether the (ideological) attitudes surrounding, and the (market-driven) release of, such constructs are as if a ware with hints, undertones or overtones of maliciousness? If not too outlandish an analogy, it might be a good idea not to look, in isolation, at the example of a technology alone.

<< Fair Technologies & Gatekeepers at the Fair >>

As summarized by Patton (1997), Sirotnik (1990) pointed to the importance of “equality, fairness and concern for the common welfare.” This holds both for processes of evaluation and for the implementation of interventions (in education), through the participation of those who will be most affected by what is being evaluated. These authors, among many others, offer various insights into practical methods and forms of evaluation, some more or less participatory in nature.

With this in mind, let us now divert our gaze to the stage of “AI”-labeled research, application, implementation and hype. Let us then look deeper into its evaluation (via expressing ethical concern or critique).

“AI” experts, evaluators and social media voices warn and call for fair “AI” application (in society at large, and thus also into education).

These calls occur while some are raising ethical concerns related to fairness. Others are dismissing these concerns while, in the same move, discounting the very people who voice them. For an outsider looking in on the public polemics, it might seem “cut-throat.” And yet, if we are truly concerned about ethical implications, and about answering the needs of peoples, this violent image, and the (de)relational acts it collects in its imagery, have no place. Debate, evaluate, dialog: yes. Debase, depreciate, monologue: no. Yes, it does not go unnoticed that a post such as this one initially runs as a monologue, only quietly inviting dialog.

So long as experts, and perhaps any participant alike, are their own pawns, they are also violently placed according to their loyalties to very narrow tech formats. Submit to the theology of the day and thou shalt be received. Voices are tested, probed, inducted, stripped or denied. Evaluation as such is violent and questionably serves that “common welfare” (ibid.). Individuals are silenced, or over-amplified, while others are simply not even considered worthy to be heard, let alone attempted to be understood. These processes too have been assigned to “AI”-relatable technologies (i.e., algorithmic designs and how messages are boosted or muted). So goes the human endeavor. So goes the larger human need, displaced by the forces of market gain and the loudest noises on the information highways which we cynically label as “social.”

These polemics occur when, in almost the same breath, this is kept within the bubble of the same expert voices: the engineer, the scientist, the occasional policymaker, the business leader (at times narrated as if in their own echo-chambers). The experts are “obviously” “multidisciplinary.” That is, many seem, tautologically, to come from within the fields of Engineering, Computer Science, Cognitive Science, Cybernetics, Data Science, Mathematics, Futurology, (some) Linguistics, Philosophy of Technology, or variations, sub-fields and combinations thereof. And yet, even there, the rational goes off the rails, and theologies and witches become but distastefully mislabeled dynamics, pitchforking human relations.

Some of the actors in these theatrical stagings have no industrial backing, while other characters are perceived as comfortably leaning against the strongholds of investors, financial and geopolitical forces, or mega-sized corporations. This is known. Though, it is taboo (and perhaps defeatist) to bundle it all as a (social) “reality.” Ethical considerations of these conditions are frequently hushed, or more easily burned at the stake. Mind you, across human history, this witch never was a witch, yet was evaluated to better be known as a witch; or else…

In similarity to how some perceive the financially rich –as taking stages to claim insight into any aspect of human life, needs, urgency, and decisions on filters of what is important– so too could gatekeepers in the field of “AI,” and its peripheries (Symbolic, ML or Neuro-symbolic), be perceived as deciding for the global masses what is needed for these masses, and what is to be these masses’ contribution to guiding the benefit of “AI” applications for all. It is fair and understandable that such considerations start where those with insight wander. It is also fair to state that the “masses” cannot really be asked. And yet, who then? When then? How then? Considering the proclaimed impacts of output from the field of “AI,” is it desirable that the thinking and acting stay where the bickering industry and academic experts roam?

Let us give this last question some context:

For instance, in too broad and yet more concrete strokes: hardly anyone from the pre-18-year-old generations is asked, let alone prepared, to critically participate in the supposedly transformational thinking and deployment of “AI”-related, or hyped, output. This might be because these young humans are hardly offered the tools to gain insight beyond the skill of building some “AI”-relatable deliverable. The techno-focused techniques are heralded, again understandably (yet questionable as universally agreeable), as a must-have before entering the job market. Again, to be fair, sprinkled approaches to critical and ethical thinking are presented to these youngsters (in some schools). Perhaps a curriculum or two, some designed at MIT, might come to mind. Nevertheless, seemingly, some of these attributes are only offered as mere echoes within techno-centrist curricula. Is this an offering that risks flavoring learning with ethics-washing? Is (re)considering where the voices of reflection come from, are nurtured and are located, a Luddite’s stance? As with the witch, it too could easily be labeled as such.

AI applications are narrated as transforming, ubiquitous, methodological, universal, enriching and engulfing. Does ethical and critical skill-building similarly meander through the magical land of formalized learning? Ethical and critical processes of thought and act (including computational thinking beyond “coding” alone) seem shunned, and feared to become ubiquitous, methodological, enriching, engulfing or universal (even if diverse, relative, depending on context, or depending on multiple cultural lenses). Is this a systemic pattern, an undercurrent in human societies? At times it seems that they who court such a metaphorical creek are shunned as the village fool or the witch at its outskirts.

Besides youth, people of color and of various genders see their articulations staged while filtered through the few voices that represent them from within the above-mentioned fields. Are then more than some nuances silenced, yet scraped for handpicked data, and left to bite the dust?

Finally, looking at the conditions in further abstract (with practical and relational consequences): humans are mechanomorphized (i.e., seen as machines of not much more complexity than human-made machinery and, as an old computer, dismissed if no longer befitting the mold or the preferred model). Simultaneously, while pulling in the opposite direction, the artificial machines are being anthropomorphized (i.e., made to act, look and feel as if of human flesh and psyche). Each of their applied, capitalized techniques is glorified. Some machines are promoted (through lenses of fear-mongering, one might add) as better at your human job, or as better at an increasing number of features you might perceive as part of becoming human(e).

Looking at the above, broadly brushed, scenarios:

Can we speak of fair technologies (e.g., fair “AI” applications) if the gatekeepers to fairness are mainly those who create, or those who sanction the creation of, the machinery? Can we speak of fair technologies if they who do create them have no constructive space to critique, nor to evaluate via various vectors, and to construct creatively through such feedback loops of thought-to-action? Can we speak of fair technology, or fair “AI” applications, if they who are influenced by its machinery, now and into the future, have few tools to question and evaluate? Should fairness, and its tools for evaluation, be kept aside for evaluation by the initiated alone?

While a human is not the center of the universe (anymore / for an increasing number of us), the carefully nurtured tools to think, to participatorily evaluate, and to (temporarily) place the implied transformations are sorely missing, or remain in the realm of the mysterious, the magical, witches, mesmerizations, hero-worship or feared monstrosities.

References:

Patton, M. Q. (1997). Intended Process Uses: Impacts of Evaluation, Thinking and Experiences. In: Patton, M. Q., Utilization-Focused Evaluation: The New Century Text (Ch. 5, pp. 87–113). London: Sage.

Sirotnik, K. A. (Ed.). (1990). Evaluation and Social Justice: Issues in Public Education. New Directions for Program Evaluation, No. 45. San Francisco: Jossey-Bass.

<< In The Age Of Information >>

In the Age of Information, the Age of Reason has been surpassed, signaling the return to finding meaning —confused as “knowledge”— in mesmerization, fad, hype, snake oil and data as snowflakes, moldable in any shape one desires, and quickly diffused, convolutedly, in the blinding Sun. In erotic dance this Age of Information copulates with the Age of Sharing giving its offspring heads to bump in shared gaseous dogmas.

—animasuri’22


<< Transition By Equation >>

focus pair: Mechanomorphism | Anthropomorphism

One could engage in the following over-simplifying, dichotomizing and outrageous exercise:

if we were to imagine that our species succeeded in collectively transforming humanity, that is, succeeded in transforming how the species perceives its own ontological being into one of:

“…we are best defined and relatable through mechanomorphic metaphors, mechanomorphic self-images, mechanomorphic relations and datafying processes,”

then, at that imaginary point, any anthropomorphism (as an engine for designs or visionary aims) within technologies (and that with a unique attention to those associated with the field of “AI”) might be imagined to be(come) empowered, enabled or “easier” to accomplish with mechanomorphized “humans.”

In such imagination, the mechanomorphized human, with its flesh turned powerless and stale, and its frantic fear of frailty, surrenders.

It could be imagined to be “easy,” & this since the technology (designer) would “simply” have to mimic the (human as) technology itself: machine copies machine to become machine.

Luckily this is an absurd imagination as much as Guernica is forgettable as “merely” cubistic surrealism.

<< Not Condemning the Humane into a Bin of Impracticality >>


There’s a tendency to reassign shared human endeavors into a corner of impracticality, via labels of theory or thing-without-action-nor-teeth: Philosophy (of science & ethics), art(ists), (fore)play, fiction, IPR, consent & anything in between the measurability of 2 handpicked numbers. Action 1: Imagine a world without these. Action 2: Imagine a world only with these.

Some will state that if it can’t be measured, it doesn’t exist. If it doesn’t exist in terms of being confined as a quantitative pool (e.g., a data set), it can be ignored. Ignoring can be tooled in a number of ways: devalue, or grab to revalue through one’s own lens on marketability.

(Re-)digitization, re-categorization and re-patterning of the debased, to create a set for a remodeled reality, amount to a process that is of “use” in anthropomorphization and mechanomorphization: a human being is valued as datasets of “its” output, e.g., a mapping of behavior, the results of an (artistic or other multimodal) expression, a KPI, a score.

While technology isn’t neutral, the above is not singularly a technological issue either. It is an ideologically systematized issue, with complexity and multiple vectors at play (i.e., see above: that which seems of immediate practicality, or that which is of obvious value, is not dismissed).

While the scientific and engineering methods shouldn’t be dismissed nor confused, the humans in their loops aren’t always perceiving themselves as engines outputting discrete measurables. Mechanomorphism takes away the “not always” & replaces it with a polarized use vs. waste.

Could it be that mechanomorphism, reasonably coupled with anthropomorphism, is far more a concern than its coupled partner, which itself is a serious process that should also allow thought, reflection, debate, struggle, negotiation, nuance, duty-of-care, discernment & compassion?

epilogue:

…one could engage in the following over-simplifying, dichotomizing and outrageous exercise: if we were to imagine that our species succeeded in collectively transforming humanity (as how the species perceives its own ontological being) to be one of “we are best defined and relatable through mechanomorphic metaphors, relations and datafying processes,” then any anthropomorphism within technologies (with a unique attention to those associated with the field of “AI”) might be imagined to be(come) easier to accomplish, since it would simply have to mimic itself: machine copies machine to become machine. Luckily this is absurd as much as Guernica is cubistically surreal.

Packaging the above, one might then reread Robert S. Lynd’s words, penned in 1939: “…the responsibility is to keep everlastingly challenging the present with the question: But what is it that we human beings want, and what things would have to be done, in what ways and in what sequence, in order to change the present so as to achieve it?”

(thank you to Dr. WSA for triggering this further imagination)

Lynd, R. S. (1939). Knowledge for What? Princeton: Princeton University Press.