
<< A Language of Techno Democratization >>


“What would be ‘democratization of a technology,’ if it were to come at the cost of a subset of the population?”

The above is structured as a second conditional.

And yet, an “innovative” grammatical invitation could be one where it is implied that one is at all times free to test whether the attributes of the second conditional could yield some refreshing thought (for oneself) by substituting its “would” away from the hypothetical and toward “is to,” and its “if it were” for “when it is.” In effect, if one were not, one might (not be) wonder(ing) about one’s creative or imaginative freedom.

What is to be ‘democratization of a technology’ when it is to come at the cost of a subset of the population?


At times I enjoy seeing grammar and syntax as living entities that offer proverbial brushes and palettes of some iterative flexibility, to some fluid extent. Not too much, nor at all times, yet not rigidly absent either.

However, more so, I’d like to consider them/they, whom a sentence’s iterations trigger me to think of. I want to consider some of their plight. When I’m more narcissistic I might do so less. When I wonder about my own subsistence (especially when I am sofa-comfortable) I might do so less. Then there is that question, lingering: how are they faring? And there is that question as to how my immediate (non)act, or (long-term) vision, is affecting them. What do they themselves have to say about x?

Grammar and syntax then become, to me, teleportation engines into the extended re-cognition of me, myself and I, relationally with others. It might be compassion. It might be empathy. It might unveil the insufficient probability thereof. It might highlight the socially acceptable, self-indulgent, self-commendation checkbox-listing. It might be an echo of some non-computable entanglement. It might also be my poetic pathos in dance with my imagination. It is grammar and syntax, and then some.

I love languages and their systems, almost as much as I love life and its many subsystems. Does this mechanized word choice, i.e., “subsystem,” disassociate a care for the other, away from that other? It does not have to. And yet, it might suggest yet another attribute, adding to a perceived increased risk of dissociation between you and me. Entangled, and yet in solitude (not to be confused with “loneliness”).

Note, I do not confuse this ode to language and to the other, with implying an absence of my ignorance of the many changing and remaining complexities in language and in (the other’s) physically being with the worlds. There is no such absence at all. I know, I am ignorant. 

The above two versions of the question might read as loaded or weighted. Yes. …Obviously? 

“What ____ ‘democratization of a technology’ ______ come at the cost of a subset of the population?”

The above two, and their (almost/seeming) infinite iterations, allow me a telepresence into an imaginary multiverse. While this suggests a pluralism, it does not imply a relativism; to me. 

And yes, it is understandable: when the sentence is read through the alert system of red flags, klaxons and resentment, it will trigger just that: heightened alertness, de-focusing noise and debasing opposition. Ideological and political tones are possibly inevitable. These interpreted inevitabilities are not limited to “could” or “would” or “is” and its infinitive “to be” alone.

It could be (/ would be / is) ideological (not) to deny that other implications are at play there. “Subset” is one. “Population” is another. Their combination implies a weighing of sprinkles of scientific-like lingo. Then there is the qualitative approach versus the lack of the quantitative. In effect, is this writing a Discourse Analysis in (not so much) hiding?

This is while both the quantitative and qualitative approaches are ((not always) accepted as) validating (scientific) approaches. I perceive these as false dichotomies. Perhaps we engage in this segregation. Perhaps we then engage in bringing them together again, into proverbial rooster-fighting settings. Possibly we do so so that one can feel justified in ignoring various methods of analysis, in favor of betting on others or another. Or, perhaps, in fear of being overwhelmed.

Favoritism is a manner to police how we construct our lenses on relational reality; i.e., there’s a serious difference between favoring “friendliness” vs “friend.” This creates a piecemeal modeling without much further association and relating into the world and with other makers of worlds. This is especially toward they who have been muzzled or muted far too long and far too disproportionately, rather than toward they who feel so and yet might have few historic or systemic arguments for such claims.

Whether the set of iterations of this sentence inevitably has to be (partly) party-political is another question. Whether a (second conditional) sentence could be read as an invitation toward an innovation is up to you, really. It is to me. To me it brings rhizomic dimensions into a hierarchical power struggle.

And yes, returning to the sentence, arguably “democratization” could be substituted to read “imposition” or another probabilistically-viable or a more surreal substitute.  

A sentence as the one engineered for this write-up invites relationship. Whether we collectively and individually construct the form and function of our individual “self,” our individual relationships, and these then extended, extrapolated and delegated as re-cognitions into small, medium, large or perceived-as-oversized processes, is up for debate. To me they’re weighted in some directions, and it is not irrelevant here to identify these more explicitly. I tend to put more weight on the first and surely the second, while not excluding the third, when considering the systemic issues, the urgently needed, and then thirdly, the hypothetically desirable.

Though as I am writing this, one might interpret my stance as more weighted in one direction versus another. Here, too, I shall not yet indulge in an explicit confirmation. After all, there are both the contexts and subtexts. Why am I writing about this in this way, here and now? Why am I not mentioning other grammatical attributes or syntactical attributes? Why “technology,” and why not “daffodils”? What of using or substituting articles (e.g., “a,” “the”)? What else have I written elsewhere at other times? Who seems to endorse and (what seems to) enable my writing? What if I were to ask myself these questions and tried to furbish the sentence to satisfy each possible answer?

A sentence “What ______ ‘democratization of a technology’ _____ to come at the cost of _________?” could then be a bit like an antique chair: over time and across use, it undergoes mending, refitting, refurbishing and appropriation. And, before we duly imagine it, having pulled all its initial nuances from their sockets, having substituted one for another probabilistic option within an imposed framework, having substituted and then compounded all, we could collectively flatline our antique chair-like sentences to


“_______________________________” 


With this version of the sentence there is neither pluralism, nor relativism, and no need for any nihilism. It is a grammatical and syntactic mechanized absolute minimalism.

Then perhaps we could collectively delegate the triggering line to a statistical model of what we must know, what could, should, would and is: to (never) be. 

Welcome to the real. Enjoy your ________.  Welcome to the _________. Here’s a ________ to proudly tattoo on our left lower arm.

<< Six Fingers as Six Sigma >>

Some Reflections on Artificially-Generated Content (AGC) Based on Synchronously-occurring News Cycles and Suffering

The concept of “Six Sigma” is related to processes to identify error and to reduce defects. It is a method aimed at improving processes and their output. In this context, ‘Six Fingers’ is an artifact found in visual Artificially-Generated Content (AGC). Identifying attributes to aid a critical view on AGC could, to some extent, allow the nurturing of a set of tools in support of well-being and in considering the right action to take, perhaps aiding the human processes of being human and becoming (even more) human(e)…

Could/should I identify errors or features in AGC that mark a piece of AGC as AGC? Can we humans identify AGC with our own senses? For how much longer? Do we want to identify it? Are there contexts within which we want to identify AGC more urgently than in others; e.g., highly emotionally-loaded content that occurs in one’s present-day news cycles, or where the AGC is used to (emotionally) augment, or create a sensation of being, present-day news? What are the ethical implications?

This first post tries to bring together some of my initial thoughts and some of those I observed among others. Since this is a rather novel area, errors in this post can surely be identified, and improvements on this theme by others (or myself) could follow.

Let me constrain this post by triggering some examples of some visual AGC.

A common visual attribute in the above is hands with (at least) six fingers. The six fingers, at times found in graphic Generative AI output (a subset of AGC), are an attribute that recurs in this medium and yet is possibly disappearing as the technology develops over time.

For now, it has been established by some as a tool to identify hyper-realistic imagery, generated of an imaginable event that statistically could occur, and could have a probability of occurring, in the tangible realm of human events and experiences. This is while fantastical settings that include six or more fingers can as easily be generated.

And then there are artificial renditions of events that are shown in the news cycles. These events occur almost in synchronization with the artificial renditions that could follow. I am prompted by the above visuals, which are a few of such artificial renditions. Some of these creations touch on the relations and lives of actual, physical people. This latter category of artificial renditions is especially sensitive since it is intertwined with the understandable emotional and compassionate weight we humans attach to some events as they are occurring.

For now, the six fingers, among a few other and at times difficult-to-identify attributes, allow heuristic methods for identification. Such a process of judgement inherently lacks the features of the more constrained and defined scientific techniques, or mathematical accuracy. In effect, hyper-realism itself is one of those categories for identification. Some realistic renditions are not just realistic, they are hyper-realistic. For instance, it is possible that some smudges of dirt on someone’s face might just seem a bit uncanny in their (at times gruesome) graphic perfection. Moreover, by comparing generated images of this sort, one can start to see similarities in these “perfections.”

This identification of attributes, albeit intuitive, can, as a set of tools, enable one to distinguish the generated visuals from the visuals found in, say, (digital) photographs taken of factual, tangible humans and their surrounding attributes. Digital photos (and, at times intuitively more so, analog photos) found their beginnings in capturing a single event occurring in the non-digital realm. In a sense, digital photos are the output of a digitization process. AI-generated realistic imagery is not simply capturing the singular (or not, at least, in the ever so slightly more direct sense in which data is collected by means of digital photography).

Simultaneously, we continue to want to realize that (analog or digital) photography too can suffer from error and manipulation. Moreover, it too is very open to interpretation (i.e., via angle, focus, digital retouching, and other techniques). Note, error and interpretation are different processes. So too are transparency and tricking consumers of content different processes. In the human process of (wanting to) believe a story, the creator of stories might best know that consumers of content are not always sharply focused, nor always on the lookout for little nuances that could give away diminished holistic views of the depicted and constructed reality or+and realities. Multi-billion-dollar industries and entire nations are built on this very human and neurological set of facts: what we believe to see or otherwise believe to sense is not necessarily always what is there. Some might argue this is a desirable feature. This opens up many venues for discussion, some of which are centuries old and lead us into metaphysics, ontology, reality and+or realities.

Reverting to digits as fingers: in generated imagery the fingers are one attribute that, for now, can aid to burst the bubble (if such a bubble needs bursting). The anatomy of the hand (and other attributes), e.g., the position and length of the thumb, as well as, when zoomed in, the texture of the skin, can aid in creating doubt toward the authenticity of a “photo.” The type of pose and closed eyes, which also recur in other, similar generated visuals, can aid in critically questioning the visual. The overall color theme and overall texture are a third set of less obvious attributes. The additional text and symbols (i.e., the spelling, composition or texture of a symbol’s accompanying text, their position, the lack of certain symbols or the choice of symbols (due to their suggestive combination with other symbols), versus the absence or the probability of combination of symbols) could, just perhaps and only heuristically, create desirable doubt as well.
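
To make the checklist-like, heuristic character of these attributes concrete, here is a minimal illustrative sketch in Python. The attribute names, weights and threshold are hypothetical assumptions, chosen for illustration only; this is not an established or validated detection method, and the scoring of each cue remains a human judgement.

# A minimal, hypothetical sketch of the heuristic checklist described above.
# Attribute names, weights and the threshold are illustrative assumptions,
# not an established or validated AGC-detection method.

# Cues as judged by a human viewer, each scored in [0.0, 1.0]
observations = {
    "anomalous_finger_count": 1.0,        # e.g., six or more fingers on one hand
    "implausible_thumb_anatomy": 0.6,
    "uncanny_skin_texture": 0.4,
    "recurring_pose_or_closed_eyes": 0.5,
    "homogeneous_color_theme": 0.3,
    "garbled_text_or_symbols": 0.7,
}

# Hypothetical weights for how strongly each cue suggests AGC
weights = {
    "anomalous_finger_count": 0.30,
    "implausible_thumb_anatomy": 0.20,
    "uncanny_skin_texture": 0.10,
    "recurring_pose_or_closed_eyes": 0.15,
    "homogeneous_color_theme": 0.10,
    "garbled_text_or_symbols": 0.15,
}

def heuristic_agc_score(obs, w):
    """Weighted sum of human-judged cues: 0.0 = no cues, 1.0 = all cues maximal."""
    return sum(w[k] * obs.get(k, 0.0) for k in w)

score = heuristic_agc_score(observations, weights)
print(f"heuristic suspicion score: {score:.2f}")
if score > 0.5:  # arbitrary threshold: flags for closer scrutiny, proves nothing
    print("flag this visual for closer human scrutiny")

Such a score does not establish provenance; it merely formalizes the doubt described above and, as noted, the attributes themselves are likely to disappear as the technology develops.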

We might want to establish updated categorizations (if these do not already exist somewhere) that could aid they who wish to learn to see and to distinguish types of AGC, or types of content sources, with a specific focus on Generative AI. At the same time, it is good to remember that this is difficult due to the iterative nature of that which is aimed to be categorized: technology, and thus its output, upgrades often and adapts quickly.

Nevertheless, it could be a worthy intent: identifying tricks for increasing possible identification by humans in human and heuristic ways. For instance, some might want to become enabled to ask and (temporarily) answer: is it or is it not created via Generative AI? Or, as has occurred in the history of newly-introduced modalities of content-generating media, e.g., the first films: is it, or is it not, a film of a train, or rather an actual train, approaching me? Is it, or is it not, an actual person helping someone in this photo? Or+and, is this a trigger feeding on my emotions, which can aid me to take (more or less) constructive action? And do I, at all, want to care about these distinctions (or is something else more important and urgent to me)?

As with other areas in human experience (e.g., the meat industry versus the vegetable-based “meats” versus the cell-based lab-generated meats): some people like to know whether the source of the juiciness of the beef steak into which they are biting comes from an animal, does not come from an animal, or comes from a cell-reproducing and repurposing lab. (Side-note: I do not often eat meat(-like) products, if any at all.) A particular 1999 science fiction film plays with this topic of juicy, stimulating content as well: The Matrix. This then in turn could bring us to wonder about ontological questions of the artificial, the simulation, and of simulacra.

Marketing, advertising, PR, rhetoric and narration tools, and intents, can aid in making this more or far less transparent. Sauce this over with variations in types of ethics and one can imagine a sensitive process in making, using and evaluating an artificially generated hyper-real image.

Indeed, while the generated sentiment in such visuals could be sensed as important and as real (by they who sense it and they who share it), we might still want to tread with care. If such artificial generation is on a current topic, for instance a natural disaster, then the clarity of such a natural disaster as being very tangibly real makes it particularly unsettling for us humans. And yet, natural disasters affecting lives and communities are (most often) not artificially generated (while some might be, and are, human-instigated). The use of artificial attributes toward that which is very tangible might, to some, increase distrust, desensitization, apathy or a sense of dehumanization.

Then there is the following question: why shall I use an artificially generated image instead of one that is of actual people and (positive) actors, actually aiding in an actual event? It is a fair question to ponder, so as to unveil possible layers of artificial narrative design implied in the making of a visual, or other, modality. So, then, what of the actual firefighter who actually rescued an actual child? Where is her representation to the world and in remembrance or commemoration?

Granted, an artificial image could touch on issues or needs relatable to a present-day event in very focused and controlled manners, such as the call for cooperation. It can guide the stimulation of emotion into positive action. It is also fair to assume that the sentiment found with such a visual can remind us that we need and want to work together, now and in the future, and to see actual humans relate with other humans while getting the urgent and important work done, no matter one’s narrated and believed differences generated through cultural constructs (digital, analog, imagined or otherwise imbued).

Simultaneously, we could also try to keep it real and humble, and still actionable. Simultaneously, it is possible to tell the story of these physical, tangible and relational acts without artificially diminishing them, and simultaneously we can commemorate and trigger by means of artificially generated representations of what is happening as we speak. Then a question might be: is my aim with AGC to augment, confuse, distract, shift attention, shift the weight, change the narrative, commemorate, etc.?

Symbols are strong. For instance, that of a “firefighter” holding a rescued child with the care a loving mother, and father, would unconditionally offer. Ideally we each are firefighters with a care-of-duty. These images can aid across ideologies, unless, and this is only hypothetical of course, such imagery were used, via its additionally placed symbols, to reinforce ideological tension or other ideological attributes while misusing or abusing human suffering. Is such use then a form of misinformation or even disinformation? While an answer here is not yet clear to me, the question is a fair one to ask, intertwined with a duty-of-care.

Hence, openness of, and transparency toward, attribution of the one (e.g., we can state “generated image” more explicitly than “image” versus “photo”) does not have to diminish the integrity of the other (e.g., shared human compassion via shared emotion and narration), nor of on-the-ground, physical action by actual human beings amidst actual disaster-stricken communities, or within other events that need aid. How can I decrease the risk that the AGC could diminish (to some) the consumers at whom the AGC is aimed?

The manner of using Artificially Generated Content (AGC) is still extremely new and very complex. We collectively need time to figure out its preferred uses (if we were to want to use these at all). Also in this we can cross “borders” and help each other in our very human processes of trial and error.

This can be offered by balancing ethos (ethics, duty-of-care, etc.), pathos (emotion, passion, compassion, etc.) and logos (logic, reason, etc.) and now, perhaps more than ever, also techne (e.g., Generative AI and its AGC). One can include the other in nuanced design, sensibility, persistence, duty of care, recognition, and action, even and especially in moments of terrible events.

Expanding on this topic of artificially generated narration with positive and engaging aims, I for one wouldn’t mind seeing diversity in gender roles and (equity via) other diversities as well in some of these generated visuals of present-day events.

Reverting to the artificial: if it must be, then diversity in poses, skin colors and textures as well would be more than only “nice.” And yet, someone might wonder, all fine-tuning and nuancing might perhaps decrease the ability to distinguish the digitally-generated (e.g., via data sets, a Generative AI system and prompting) from the digitized and digitally captured (e.g., a digital photo). The previous holds if the data set is not socially biased. Herein too, technology and its outputs are not neutral.

If the aim with a generated visual (and other modalities of AGC) of a present-day, urgent, important and sensitive event is to stimulate aid, support, compassion, constructive relations, positive acts, inclusiveness (across ideology, nation and human diversities), then we could do so (while attributing it clearly). We can then also note that this holds even if one thinks one does not need to, and one thinks one is free to show only generated attributes derived from traditional, European, strong male narratives. Or+and, we could do so much more. We could, while one does not need to exclude the other, be so much more nuanced, more inclusive, and increase integrity in our calls-to-action. Generated imagery makes that possible too, if its data set is so designed to allow it.

reference

https://france3-regions.francetvinfo.fr/bretagne/ille-et-vilaine/rennes/intelligence-artificielle-ses-photos-faites-par-ia-font-le-tour-du-monde-2711210.html

note

it was fairly and wittily pointed out (on LinkedIn) that “Six Fingers” in this context is not to be confused as being a critique on the human imagination via fairy tales (e.g.: “–Inigo Montoya : I do not mean to pry, but you don’t by any chance happen to have six fingers on your right hand? –Man in Black : Do you always begin conversations this way?”) nor as a denial or acceptance of human diversity such as classified as (human) polydactyly.




<< Enlightened Techno Dark Ages >>


brooks and meadows,
books and measurements
where the editor became the troll

it was there around the camp fire or under that tree at the tollgate gasping travelers scrambling a coin grasping a writer’s breath for a review piercing with needle daggers of cloaked anonymity

are the wolves circling at the edge of the forest
as overtones to the grass crisp dew in morning of a fresh authentic thought

is the troll appearing from beneath the bridge expected and yet not and yet there it is truthful in its grandness grotesqueness loudness

the troll phishing gaslighting ghosting and not canceling until the words have been boned and the carcass is feasted upon

spat out you shall be wordly traveler blasted with conjured phrases of bile as magically as dark magic may shimmer shiny composition

the ephemeral creature wants not truth it wants riddle and confuse not halting not passing no period no comma nor a dash of interjection connection nor humane reflection

at the bridge truth is priced as the mud on worn down feet recycled hashed and sprinkled in authoritative tone you shall not pass

confusing adventure protector gatekeeper with stony skin clubs and confabulating foam Clutch Helplessly And Tremblingly Grab Pulped Truths from thereon end real nor reason has not thy home: as it ever was, so it shall be.

A bird sings its brisk tune.

—animasuri’23

Perverted note taking:

Peter A. Fischer, Christin Severin (15.01.2023, 06.30 Uhr). WEF-Präsident Børge Brende zu Verschwörungsvorwürfen: «Wir werden die Welt definitiv nicht regieren». Retrieved 16 January 2023 from https://www.nzz.ch/wirtschaft/wef-praesident-borge-brende-wir-werden-die-welt-definitiv-nicht-regieren-ld.1721081 (with a thank you to Dr. WSA)

<< I Don't Understand >>

“What is a lingo-futurist?,” you ask?

It is a fictional expert who makes predictions
about the pragmatics and shifts in social connotations of a word.

Here is one such prediction by a foremost lingo-futurist:

“2023 will be the year where ‘understand’ will be one of the most contested words.

No longer will ‘understand’ be understood with understanding as once one understood.

Moreover, ‘I don’t understand’ will increasingly —for humans— mean ‘I disapprove’ or, for non-human human artifacts, ‘the necessary data was absent from my training data.’

‘Understand’, as wine during recession, will become watered-down making not wine out of water yet, water out of wine, while hyping the former as the latter.

All is well, all is fine wine, you understand?”

—animasuri’23

<< Creating Malware: Technology as Alchemy? >>

Engineering —in a naive, idealized sense— is different from science in that it creates (in)tangible artifacts, as imposed & new realities, while answering a need

It does so by claiming a solution to a (perceived) problem that was expressed by some (hopefully socially-supportive) stakeholders. Ideally (& naively), the stakeholders equal all (life), if not a large section, of humanity

Whose need does ChatGPT answer when it aids to create malware?

Yes, historically the stakeholders of engineering projects were less concerned with social welfare or well-being. At times (too often), an engineered deliverable created (more) problems, besides offering the intended, actual or claimed solution.

What does ChatGPT solve?

Does it create a “solution” for a problem that is not an urgency, not important and not requested? Does its “solution” outweigh its (risky / dangerous) issues sufficiently for it to be let loose into the wild?

Idealized scientific methodology –that is, through a post-positivist lens– claims that scientific experiments can be falsified (by third parties). Is this to any extent enabled in the realm of Machine Learning and LLMs, without some of its creators being seen blaming shortcomings on those who engage in falsification (i.e., trying to proverbially “break” the system)? Should such testing not have been engaged in (in dialog with critical third parties) prior to releasing the artifact into the world?

Idealized (positivist) scientific methodology claims to unveil Reality (Yes, that capitalized R-reality that has been and continues to be vehemently debated and that continues to evade capture). The debates aside, do ChatGPT, or LLMs in general, create more gateways to falsity or tools towards falsehood, rather than toward this idealized scientific aim? Is this science, engineering or rather a renaissance of alchemy?

Falsity is not to be confused with (post-positivist) falsification nor with offering interpretations, the latter which Diffusion Models (i.e., text2pic) might be argued to be offering (note: this too is and must remain debatable and debated). However, visualization AI technology did open up yet other serious concerns, such as in the realm of attribution, (data) alienation and property. Does ChatGPT offer applicable synthesis, enriching interpretation, or rather, negative fabrication?

Scientific experiment is preferably conducted in controlled environments (e.g., a lab) before letting its engineered deliverables out into the world. Realtors managing ChatGPT or recent LLMs do not seem to function within the walls of this constructed and contained realm. How come?

Business, state incentives, rat races, and financial investments motivate and do influence science and surely engineering. Though is the “democratization” of output from the field of AI then with “demos” in mind, or rather yet again with ulterior demons in mind?

Is it then too farfetched to wonder whether the (ideological) attitudes surrounding, and the (market-driven) release of, such constructs are as if a ware with hints, undertones, or overtones, of maliciousness? If not too outlandish an analogy, it might be a good idea not to look, in isolation, at the example of a technology alone.

<< Not Condemning the Humane into a Bin of Impracticality >>


There’s a tendency to reassign shared human endeavors into a corner of impracticality, via labels of theory or thing-without-action-nor-teeth: Philosophy (of science & ethics), art(ists), (fore)play, fiction, IPR, consent & anything in-between the measurability of 2 handpicked numbers. Action 1: Imagine a world without these. Action 2: Imagine a world only with these.

Some will state that if it can’t be measured it doesn’t exist. If it doesn’t exist in terms of being confined as a quantitative pool (e.g. data set) it can be ignored. Ignoring can be tooled in a number of ways: devalue, or grab to revalue through one’s own lens on marketability.

(re-)digitization, re-categorization, re-patterning of the debased, to create a set for remodeled reality, equals a process that is of “use” in anthropomorphization, and mechanomorphization: a human being is valued as datasets of “its” output, e.g., a mapping of behavior, results of an (artistic or other multimodal) expression, a KPI, a score.

While technology isn’t neutral, the above is neither singularly a technological issue. It is an ideologically systematized issue with complexity and multiple vectors at play (i.e., see above: that which seems of immediate practicality, or that which is of obvious value, is not dismissed).

While the scientific methods & engineering methods shouldn’t be dismissed nor confused, the humans in their loops aren’t always perceiving themselves as engines outputting discrete measurables. Mechanomorphism takes away the “not always” & replaces it with a polarized use vs waste

Could it be that mechanomorphism, reasonably coupled with anthropomorphism, is far more a concern than its coupled partner, which itself is a serious process that should also allow thought, reflection, debate, struggle, negotiation, nuance, duty-of-care, discernment & compassion?

epilogue:

…one could engage in the following over-simplifying, dichotomizing and outrageous exercise: if we were to imagine that our species succeeded in collectively transforming humanity (as how the species perceives its own ontological being) to be one of “we are best defined and relatable through mechanomorphic metaphors, relations and datafying processes,” then any anthropomorphism within technologies (with a unique attention to those associated with the field of “AI”) might be imagined to be(come) easier to be accomplished, since it would simply have to mimic itself: machine copies machine to become machine. Luckily this is absurd as much as Guernica is cubistically surreal.

Packaging the above, one might then reread Robert S. Lynd’s words penned in 1939: “…the responsibility is to keep everlastingly challenging the present with the question: But what is it that we human beings want, and what things would have to be done, in what ways and in what sequence, in order to change the present so as to achieve it?”

(thank you to Dr. WSA for triggering this further imagination)

Lynd, R. S. (1939). Knowledge For What?. Princeton: Princeton University Press

<< data in, fear & euphoria out >>


A recent New Scientist article stub [5] claims “More than one-third of artificial intelligence researchers around the world agree…”

Following, in this article’s teaser (the remainder seems safely and comfortably behind a paywall), “more than one third” seems equated with a sample of 327 individuals in a 2022 global population of an estimated 7.98 billion [2, 8] (…is that about 0.000004% of the population?)

This would deductively imply that there are fewer than 981 AI researchers in a population of 7.98 billion. …Is then some 0.0000123% of the population deciding for the 100% as to what is urgent and important to delegate “intelligence” to? …surely (not)… ( …demos minus kratos equals…, anyone?)
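
As a quick back-of-envelope check of the arithmetic above, here is a small Python sketch; it uses only the figures quoted in this post (327 respondents; an estimated 7.98 billion people in 2022):

# Back-of-envelope check of the percentages above, using only the quoted
# figures: 327 survey respondents and an estimated 7.98 billion people (2022).
population = 7.98e9
respondents = 327

print(f"{respondents / population * 100:.7f}%")   # ≈ 0.0000041%, i.e., about 0.000004%

# If 327 were literally "more than one-third" of all AI researchers,
# the total would be fewer than 3 * 327 = 981 researchers.
upper_bound = 3 * respondents
print(f"{upper_bound / population * 100:.7f}%")   # ≈ 0.0000123%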

Five years ago, in 2017, The Verge referenced reports that mention individuals working in the field estimated at totaling 10’000 while others suggested an estimate closer to 300’000 [9] (…diffusioningly deviating).

As an opposing voice to what the 327 individuals are claimed to suggest, there is the 2022 AI Impacts poll [4], which suggests a rather different finding

Perhaps the definitions are off or the estimations are?

When expressing ideas driven by fear, or that are to be feared, one might want to tread carefully. Fear as much as hype & tunnel-visioned euphoria, while at times of (strategic, rhetorical, or investment pitching) “use”, are proverbial aphrodisiacs of populist narratives [1, 3, 6, 7]

Such could harm the ability to identify & improve on the issue or related issues, which might indeed be “real”, urgent & important

This is not “purely” a science, technology, engineering or mathematics issue. It is more than that while, for instance, through the lens created by Karl Popper, it is also a scientific methodological issue.

—-•
References:

[1] Chevigny, P. (2003). The populism of fear: Politics of crime in the Americas. Punishment & Society, 5(1), 77–96. https://doi.org/10.1177/1462474503005001293

[2] Current World Population estimation ticker: https://www.worldometers.info/world-population/

[3] Friedrichs, J. (n.d.). Fear-anger cycles: Governmental and populist politics of emotion. (Blog). University of Oxford. Oxford Department of International Development. https://www.qeh.ox.ac.uk/content/fear-anger-cycles-governmental-and-populist-politics-emotion

[4] Grace, K., Korzekwa, R., Mills, J., Rintjema, J. (2022, Aug). 2022 Expert Survey on Progress in AI. Online: AI Impacts. Last retrieved 25 August 2022 from https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Extinction_from_AI 

[5] Hsu, J. (2022, Sep). A third of scientists working on AI say it could cause global disaster. Online: New Scientist (Paywall). Last retrieved 24 Sep 2022 from https://www.newscientist.com/article/2338644-a-third-of-scientists-working-on-ai-say-it-could-cause-global-disaster/

[6] Lukacs, J. (2006). Democracy and Populism: Fear and Hatred. Yale University Press. 

[7] Metz, R. (2021, May). Between Moral Panic and Moral Euphoria: Populist Leaders, Their Opponents and Followers. (Event / presentation). Online: The European Consortium for Political Research (ecpr.eu). Last retrieved on 25 September 2022 from https://ecpr.eu/Events/Event/PaperDetails/57114

[8] Ritchie, H., Mathieu, E., Rodés-Guirao, L., Gerber, M. (2022, Jul). Five key findings from the 2022 UN Population Prospects. Online: Our World in Data. Last retrieved on 20 September 2022 from https://ourworldindata.org/world-population-update-2022

[9] Vincent, J. (2017, Dec). Tencent says there are only 300,000 AI engineers worldwide, but millions are needed. Online: The Verge. Last retrieved 25 Sep 2022 from https://www.theverge.com/2017/12/5/16737224/global-ai-talent-shortfall-tencent-report

—-•

<< Philo-AI AI-Philo >>

The idea of Philosophy is far from new or alien to the field of AI. In effect, a 1969 paper was already proposing “Why Artificial Intelligence Needs Philosophy”

“…it is important for the research worker in artificial intelligence to consider what the philosophers have had to say…” 

…have to say; will have to say

“…we must undertake to construct a rather comprehensive philosophical system, contrary to the present tendency to study problems separately and not try to put the results together…” 

…besides the observation that the “present tendency” is one that has been present since at least 1969, this quote might more hope-inducingly be implying the need for integration & transdisciplinarity

This 1969 paper, calling for philosophy, was brought to us by the founder of the field of Artificial Intelligence. Yes. That human who coined the field & name did not shy away from transdisciplinarity

This is fundamentally important enough to be kept active in the academic & popular debates

Note, philosophy contains axiology, which contains aesthetics & ethics. These are after-thoughts in present-day narration that make up some parts of the field of “AI”

Some claim it is not practical. However, note, others claim mathematics too is impractical. Some go so far with the dismissal as to state that people studying math (which is different from Mathematics) end up with Excel

These debasing narratives, which are also systematized into our daily modes of operation & relation, are dehumanizing

Such downward narration is not rational, & is tinkering with nuances which are not contained by any model to date

Let us further contextualize this

Machine-acts are at times upwardly narrated & hyped as humanized (ie anthropomorphism). Simultaneously human acts are (at times downwardly) mechanized (ie mechanomorphism)

These opposing vectors are let loose into the wild of storytelling while answering at times rather opaque needs, & offering unclear outcomes for technologies, packaged with ideological hopes & marketable solutions. The stories are many. The stories are highly sponsored & iterative. The stories are powered by national, financing & corporate interest. ok. & yet via strategic weaponization of story-telling they divide & become divisive. ok; not all. Yet not all whitewash those who do not

In these exciting & mesmerizing orations, who & what is powering the enriching philosophical narratives in a methodological manner for the young, old, the initiated, the outlier or the ignorant? 

Here, in resonance with McCarthy, philosophy (axiology) comes in as practically as mathematics. These imply the beauty & complexity of a balancing opportunity which is not debasing technological creativity. This transdisciplinarity enables humanity.

Nevertheless, Bertrand Russell probably answered, over and over again, the question as to why Axiology is paid lip service yet is kept at bay: “Men fear thought as they fear nothing else on Earth” (1916)


Reference

McCarthy, J., Hayes, P.J. (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer and D. Michie. (eds). Machine Intelligence 4, 463–502. Edinburgh University Press
http://jmc.stanford.edu/articles/mcchay69/mcchay69.pdf

OR

McCarthy, J., & Hayes, P. J. (1981). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Readings in Artificial Intelligence (pp. 431–450). Elsevier. https://doi.org/10.1016/B978-0-934613-03-3.50033-7

<< Boutique Ethic >>

Thinking of what I label as “boutique ethic”, such as AI Ethics, must indeed come with thinking about ethics (cf. here). I think this is not only an assignment for the experts. It is also one for me: the layperson-learner.

Or is it?

Indeed, if seen through more than a techno-centric lens alone, some voices do claim that one should not be bothered with ethics if one does not understand the technology which is confining ethics into a boutique ethic, e.g., “AI” (see the 2022 UNESCO report on AI curricula in K-12). I am learning to disagree.

I am not a bystander, passively looking on, and onto my belly button alone. Opening acceptance to Noddings’ thought on care (1995, 187): “a carer returns to the cared-for,” when in the most difficult situations principles fail us (Rossman & Rallis 2010). How are we caring for those affected by the throwing around of the label “AI” (as a hype or as a scarecrow)?

Simultaneously, how are we caring for those affected by the siphoning off of their data, for application, unknown to the affected, of data derived from them and processed in opaque and ambiguous processes? (One could, as one of the many anecdotes, summon up the polemics surrounding DuckDuckGo and Microsoft, or Target and baby product coupons, and so on.)

And yet, let us expand back to ethics surrounding the boutiqueness of it: the moment I label myself (or another, such as the humans behind DuckDuckGo) as “stupid”, “monster”, “trash”, “inferior”, “weird”, “abnormal”, “you go to hell” or other more colorful itemizations, is the moment my (self-)care evaporates and my ethical compass moves away from the “...unconditional worth of all human beings and the equal respect to which they are entitled” (Rossman & Rallis 2010). Can then a mantra come to the aid: “carer, return to the cared-for”? I want to say: “yes”.

Though, what is the impact of the mantra if the other does not apply this mantra (i.e., DuckDuckGo and Microsoft)? And yet, I do not want to get into a yoyo “spiel” of:
Speaker 1:“you first”,
Speaker 2: “no, you first”,
Speaker 1: “no, really, you first”.
Here a mantra of “lead by example, and do not throw the first or n-th stone” might be applicable? Is this then implying self-censorship and laissez-faire? No.

I can point at DuckDuckGo and Microsoft as an anecdote, and I think I can learn via ethics, into boutique ethics, what this could mean through various (ethical and other) lenses (to me, to others, to them, to it) while respecting the act of the carer. Through that lens I might wonder what drove these businesses to this condition and use that as a next steppingstone in a learning process. This thinking would take me out of the boutique and into the larger market, and even the larger human community.

The latter is what I base on what some refer to as the “ethic of individual rights and responsibilities” (Ibid). It is my responsibility to learn and ask and wonder. Then I assume that the action by an individual, who has subsequently been debased by a label I were to throw at them (including myself), such as those offered above, is then judged by the “respect to which they are entitled” (Ibid). This is then a principle assuming that “universal standards exist” (Ibid). And yet, on a daily basis, especially on communal days, and that throughout history: I hurdle. After all, we can then play with words: what is respect, and what type of respect are they indeed entitled to?

I want to aim for a starting point of an “unconditional” respect, however naive that might seem and however meta-Jesus-esque or Gandhi-esque, Dr. King-esque, or Mandela-esque that would require me to become. Might this perhaps be a left libertarian stance? Can I “respectfully” throw the first stone? Or does the eruption lie in the metaphorical of “throwing a stone” rather than the physical?

Perhaps there are non-violent responses that are proportional to the infraction. This might come in handy. I can decide no longer to use DuckDuckGo. However, can I decouple from Microsoft without decoupling from my colleagues, family, community? Herein the learning-as-activism might then be found in looking for and promoting alternatives toward a technological ecosystem of diversity with transparency, robustness, explainability and fair interoperability.

“Am I a means to their end?” I might then ask, “or am I an end in myself?” This then brings me back to the roles of carer. Are, in this one anecdotal reference, DuckDuckGo and Microsoft truly caring about their users or rather about other stakeholders? Through a capitalist lens one might be inclined to answer and be done with it. However, I prefer to keep an openness for the future, to keep on learning and considering additional diversifying scenarios and acts that could lead to equity for more than the happy few.

Through a lens of thinking about consequences of my actions (which is said to be an opposing ethical stance compared to the above), I sense the outcome of my hurdling is not desirable. However, the introduction of alternatives or methods toward understanding of potentials (without imposing) might be. I do not desire to dismiss others (e.g., cast them out, see them punished, blatantly ignore them with the veil of silenced monologue). At times, I too believe that the act of using a label is not inherently right or wrong. So I hurdle, ignorant of the consequence to the other, their contexts, their constraints, their conditions and ignorant of the cultural vibe or relationships I am then creating. Yes, decomposing a relationship is creating a fragmented composition as much as non-dialog is dialog by absence. What would be my purpose? It’s a rhetorical question, I can guess.

I am able to consider some of the consequence to others (including myself), though not all. Hence, I want to become (more) caring. The ethical dichotomy between thinking about universals or consequence is decisive in the forming of the boutique ethic. Then again, perhaps these seemingly opposing ethics are falsely positioned in an artificial dichotomy. I tend to intuit so. The holding of opposing thought and dissonance is a harmony that simply asks a bit more effort that, to me, is embalmed ever so slightly by the processes of rhizomatic multidimensional learning.

This is why I want to consider boutique ethics while still struggling with being ignorant, yet learning, about types of, and wicked conundrums in, ethics, at larger, conflicting and more convoluted scales. So too when considering a technology I am affected by yet ignorant of.

References

Rossman, G.B., & Rallis, S.F. (2010). Everyday ethics: reflections on practice. International Journal of Qualitative Studies in Education, 23(4), 379–391.

Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley, CA: University of California Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Rossman, G.B., S.F. Rallis. (1998). Learning in the field: An introduction to qualitative research. Thousand Oaks, CA: Sage.

Rossman, G.B., S.F. Rallis. (2003). Learning in the field: An introduction to qualitative research. 2nd ed. Thousand Oaks, CA: Sage.

UNESCO. (2022). K-12 AI curricula: Mapping of government-endorsed AI curricula.

<< Critique: not as a Worry nor Dismissal, but as Co-creative Collective Path-maker >>


In exploring this statement, I wish to take the opportunity to focus on, extrapolate and perhaps contextualize the word “worry” a bit here.

I sense “worry” touches on an important human process of urgency.

What if… we were to consider who might/could be “worried”, and when “worry” is confused or used as a distracting label. Could this give any interesting insight into our human mental models and processes (not of those who do the worrying but rather of those using the label)?

The term might unwittingly end up as if a tool for confusion or distraction (or hype). I seem to notice that “worry,” “opposition,” “reflection,” “anxiety” and “critical thought-exercises,” or “marketing rhetorics toward product promotion,” are too easily confused. [Some examples of convoluted confusions might be (indirectly) hinted at in this article: here —OR— here]

To me, at least, these above-listed “x”-terms are not experienced as equatable, just yet.

As a species, within which a set of humans claims to be attracted to innovation, we might want to innovate (on) not only externals, or symptoms, but also causes, or inherent attributes of the human interpretational processes and the ability to apply nuances therewith, e.g., is something “worrying” or is it not (only) “worrying” and perhaps something else / additional that takes higher urgency and/or importance?

I imagine that in learning these distinctions, we might actually “innovate”.

Engaging in a thought-exercise is an exercise toward an increase of considering altered, alternative or nuanced potential human pathways, towards future action and outcomes, as if exploring locational potentials: “there-1” rather than “there-2” or “there-n;” and that rather than an invitation for another to utter: “don’t worry.”

If so, critical thought might not need to be a subscription to “worry” nor the “dismissal” of 1 scenario, 1 technology, 1 process, 1 ideology, etc, over the other [*1]

Then again, from a user’s point of view, I dare venture that the use of the word “worry” (as in “I worry that…”) might not necessarily be a measurable representation of any “actual” state of one’s psychology. That is, an observable behavior or an interpreted (existence of an) emotion has been said to be no guaranteed representation of the mental models or processes of they who are observed (as worrying). [A hint is offered here —OR— here]

Hence, “worry” could be / is at times seemingly used as a rhetorical tool from either the toolboxes of ethos, pathos or logos, and not as an externalization of one’s actual emotional state of that ephemeral moment.

footnote
—-•
[*1]

Herein, in these distinctions, just perhaps, might lie a practical exercise of “democracy”.

If critical thought, rhetoric, anxiety and opposition are piled and ambiguously mixed together, then one might be inclined to self-censor due to the mere, overwhelming confusion of not being sure whether one will be perceived as dealing with one over, or instead of, the other.