Table of Contents
The First Set of Limitation-rhetoric.
The Second Set of Limitation-rhetoric.
The Third Set of Limitation-rhetoric.
The Fourth Set of Limitation-rhetoric.
The Fifth Set of Limitation-rhetoric.
The Sixth Set of Limitation-rhetoric.
Bringing the Sets of Limitation-rhetoric Together.
Why the first set should not be underestimated as potentially troubling.
Epilogue | Asimov: Ever So Slightly Closer to I, Robot.
Prologue & Introduction
‘Limitations are the mother of invention, or is it: Invention is the mother of limitations?’ Somehow these syntactic options remind me of Zappa rather than Kranzberg. And yet:
- “Technology is neither good nor bad; nor is it neutral.” (Kranzberg 1995: 6)
- “Invention is the mother of necessity” (Kranzberg 1995: 7)
- “Technology comes in packages, big and small” (Kranzberg 1995: 7)
- “Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions” (Kranzberg 1995: 8)
- “All history is relevant, but the history of technology is the most relevant.” (Kranzberg 1995: 9)
- “Technology is a very human activity – and so is the history of technology” (Kranzberg 1995: 11)
In the realm of automation that is claimed to come with degrees of smartness, we can here call on Asimov’s Three. Besides his famous Three Laws of robot ethics, some of us might feel urged to also recite Kranzberg’s six laws, quoted above. Our techno-recitals are at times performed as if these were biblical mantras.
For those wondering: they are not, and perhaps we should not blindly enact this ritual of rote-culturalization either. As a hint toward this latter seed of doubt, in the context of our present technological developments, a handpicked collection of quotes is offered below, drawn from the source of Asimov’s Three (and then some). That work, I, Robot, suggests how “limitation” is a fluid concept in the hands of human actors.
‘Fluidity’ is itself neither good, nor bad, nor neutral. This tends to be the case for many concepts; for ‘technology,’ however, we tend to forget it, and effects realized through scaling add a further issue, requesting us to tread with care before labeling it as singularly neutral, bad or good. In our markets and pop cultures we might tend to overlook this, or keep it muted. In the process of becoming aware thereof lies one little example of a human opportunity: to become a human innovation as human.
These and other “laws” create and offer limitation to this innovation of humanness. The act of innovation does not just lie with lithium-hungry items. Such laws are claimed to make explicit the limitations inherent to one or another system. So far so good with the obviousness.
When looking through a socio-technological lens, it seems that a few voices revisit a need to point out our human limitations rather than (also) pointing out the limitations of some human output: the art, the artifact, the artifice and the artificial. Simultaneously, some seem to only point at techno-centric solutions and seem to have forgone hope for any human-centric and human-relational set of approaches. ‘Approaches’ do not only have to be “solutions” to non-existing or existing “problems.” Thirdly, some go as far as reducing humans to being mainly machine-like. For the latter, one can see Asimov’s quotes here below, or any mechanomorphic metaphors, especially those analogies about how our brain is claimed to function. Penrose and Hameroff, for instance, might argue against this. (Ekert et al. 1998)
The techno-centric approach (that at times also debases hope for and with the human) is creamed-up to be an Icarus-like pinnacle of enriching human accomplishment. If a reader senses a paradox here, please do sense it. It is there.
While the human, or for some the entirety of humanity, is grabbed into a fallacy of epistemic proportions (i.e., Leibniz), there are, spread out, false dichotomies as well.
The two, the human and the human output, are tautologically intertwined. In common human storytelling, these two are as characters swapping places.
At times, one is narrated as causality, claimed cause, or as effect of the other. Causality is very confusing for many of us, me included. For instance, I too easily assign causality where there might be none. Secondly, it has been said that some forms of causality are ignored and thought of as pure correlation (where correlation it is not).
Linear, reductionist and (beautifully-humanly) flawed characters these two are made into when considering ‘limitation.’ Each human, and her accomplishments, is quickly deconstructed by other humans, to present the collective of both with cultural claims and scientific “causal” “facts” of ‘limitation.’
What are these limitations?
Through the narratives surrounding our present-day technologies, and their claimed effects on society, I started, still frivolously, to organize sets of ‘limitation.’ I organized these as I, for now, perceive and interpret them through a biased lens of opportunity and innovation.
Perhaps you too find it of use.
Following a fishing expedition for limitation-rhetoric, six variations on the theme have, for now, been listed here.
To some degree I subscribe to some, while I find others questionable, since they seem to hint toward allowing increased risk (this too a ‘limitation’) to a human’s, and to life’s, well-being.
Each category of this limitation-rhetoric uses its own memes, one-liners and narratives that run the risk of demagoguery. In this exercise, I did not catalogue these. That said, some sets do represent more rational and fair reasoning on urgent issues and on the limitations of humans and their imposing output. That is, too many individuals and communities have been negatively experiencing limitations for far too long, at times with far too high an intensity and far too large a set of negative consequences (e.g., abuse, death).
The First Set of Limitation-rhetoric.
Some voices have claimed or evangelized that the unhinged or undesirable attributes observed in the output of some of our latest ChatBots unveil the limitations of all of humanity. These hyperbolic voices at times (not all) seem to have a motivation to distract away from the limitations of their own design work, or of engineering methodologies, as these are constrained by business vectors or competitive acceleration risks. This is, with a broad brushstroke, how human endeavors tend to be. Some use their claim of the oracle-status of this one technology, exhibiting their slice of delusions of grandeur. Since they too are human, therein might seem to lie some odd tension. Imagine this to be a Stockhausen orchestral and choral piece and it might make sense. For instance, imagine a character claiming: “see what technique we have developed, following an allocation of vast resources. It enables us to showcase humanity’s fragility, in its entirety.” Or imagine such a voice as an oracle-imbued apology: “it’s not us, we did not create this. It’s them, the representatives of the weak entirety of humanity.”
The Second Set of Limitation-rhetoric.
Other voices add to the first set: not only is the output telling, the input too reveals human limitation. This is observed when humans try to jailbreak this one technological system of ChatBots, through prompt injections that are, by some, labeled as singularly hostile. This is then by some (not all) uttered as a distraction. Such a narrative seems to aim at steering us (i.e., humanity) away from the potentially collective act of testing the limits for the purpose of engineering, scientific, legal, and social processes. Not to mention the aim of testing human play, liberty and becoming. Each comes with its own methods of due diligence. Each offers a testing of robustness, viability, credibility, replicability, measurability and standard or cultural conformance. Each provides opportunity and pressure toward the exploration of innovations thereof. Innovation is not only the digital artifact as a static object for reverence through obediently-singular usage, with ‘usage’ as if solely the prescribed application. (enrich: omni-usage)
The Third Set of Limitation-rhetoric.
A third set includes humans who might identify the limitations of humans in a liminal space: in between the input and output of the given tech, e.g., a Large Language Model. The black-box issue would be one, implying human shortcomings with transparency. Then again, some others argue that a lack of transparency is not always a limitation, but rather a design feature. Some vectors in the set of transparency enable explainability. Other processes and attributes would be just a bit too revealing if open access is, for capitalist or security reasons, not desired. Possibly other arguments against maximum transparency could be (individual) privacy, dignity, integrity, and respect. Through the lens of desirability and limitation, there seem to be two mythological yet human subsets here: those who like to exhibit, and those who like to mystify. Yet, they are united by those who’d like to take a peep.
The Fourth Set of Limitation-rhetoric.
Fourthly, limitations are identified contextually, and as “external” (though not really, if context implies an inter-related process between that which is contextualized and its contexts): the carbon footprint of the server farms that run these models, the GPU power needed, the cost of several hundreds of thousands of dollars per day to feed and run some hungry model, and the access to its usage in server farms hidden from humans who have similar, yet perhaps more urgent or humane, needs for basic resources.
The Fifth Set of Limitation-rhetoric.
Then there is the calling-out of subtext and of the inherent, at times baked-in, systemic issues of training data. Analysis lays bare the historic as well as the socially discriminatory biases (Buolamwini & Gebru 2018). Some counter-voices then try to muddy the waters by noting that discrimination and bias are needed in statistics. Surely, and yet…
Moreover, some human (and even some artificial) voices identify systemic bias beyond the data, as these are found in the Caucasian and Anglo-Saxon “bro”-culture or in (post-)colonialist justification and x-washing processes. This comes in addition to the calls to reconsider the outsourcing of “automation” and “AI” processing to menial human workers. These humans are being hidden behind a Wizard of Oz’s curtain, often into territories that offer more laissez-faire toward a sustaining and increasing of techno-feudalism. Counter-voices, who are at times found in the following sixth category, might claim a bias here that is too (reactionary and) anthropocentric, as opposed to an openness to an evolutionary going beyond our present (human) forms and functions. The question put forward here is not about a back (i.e., a romanticizing of an imagined past), or a forth (i.e., a techno-centric innovation through a romanticized lens of a future). It might rather be a question of: whose form and whose function, serving whom, at the cost of whom (yet again)?
The Sixth Set of Limitation-rhetoric.
Sixthly, we can identify the voices pointing out the limitations that doom and far-future needs offer. These voices are seemingly intertwined with effectiveness and with claimed altruistic narratives. Some countering pushback highlights that, while this sixth set points out such limitations, within the same set there seems to be a fervent denial of any romancing toward eugenics in its various nuances, flavors or iterations.
Bringing the Sets of Limitation-rhetoric Together.
The fifth and sixth sets of voices identifying the limitations of humanity (as a whole, or in part: of those with resources, power and control) seem to be at odds with each other. Meanwhile, the sixth and the first seem at times to act as drinking buddies and party wingmen. This in turn seems to occur while conveniently monotonizing the sexes and genders into one, which is then obscured via exnomination à la Barthes.
Could generative “AI”-technologies, of which I ignorantly think ChatBots to be one manifestation, be a cultural form of cosmetic surgery, which itself might be felt as a cultural form of medical procedure? Such technology statistically polishes a historic style into a homogeneous architecting that seems more human than human. This tendency brings sets five and six together into the human tragedy of never-good-enough, confused with relation, interaction (into the worlds), creativity, wonder, wandering, innovation and exploration.
This also hints at eugenics, or at a mass-debasement of life’s forms and functions. Such aspirations for eradication claim this or that form or function as preferred over others, while at an earlier or later stage in human history the same item was, or could be, revered and preferred as superior. A total collection of ideal human aspirations has not been, and probably shall not be, universally fixed (unless some extreme, hyperbolic, dystopian form of architecturally and technologically policed collectivism is all-inclusively imagined). I note that an eagerly desired yet lazy type of imagination plays a serious role in the creation of limitation-rhetoric.
Some human voices hint at eradicating less desirable traits (with technology). These voices are not necessarily the same voices as those who claim to see the unveiling of human limitation through the output of some technology. Interestingly, both voices might be interpreted as echoing opposition to degeneration (“entartet” in German) as an imposed, limiting label on the arts and on entartete humans. Our sadistic or masochistic traits come in extremely different weights. Sure, these are not necessarily always acted upon by all, and all at once, and at the same highest setting of intensity, as if at a Spinal Tap 11.
Why the first set should not be underestimated as potentially troubling.
The first category of limitation-rhetoric, as offered here above, I wish to highlight further. It is one often heralded by those who work in, or for, the eco-system that created the technologies to begin with. (Disclaimer: I work with and in technologies that are open to applying these.)
Are we, as individuals in humanity, as a species with its many nuances, certain we prefer our engineers to tell us how they have unveiled human nature via their own shortcomings in their designs, made under understandable financial stresses and needs for return on investment?
Engineers, as much as many a professional label, might form a human sample that lacks representation compared to the entire population. We want it differently. Yet the “to want” is not yet the “to be.” For the time being, the ‘engineer’ is a rather homogeneous set compared to what it could be, and compared to the larger population.
In addition, when considering the scientific method, Rogers warns us humans of some additional attributes that make studying a technological model, or by extension humanity through some type of LLM, a precarious business: “as far as research and scientific publications are concerned, the ‘closed’ models… cannot be meaningfully studied, and they should not become a ‘universal baseline’” (Rogers 2023).
Surely, then, popular technological (or scientific) models should not function as a “universal baseline” to judge the universal shortcomings of the entirety of humanity. Is herein a hint of a non-technological misalignment issue of engineering in service of the human, the human community and of humanity? (We could refer to Asimov’s First Law and its alterations throughout the stories of I, Robot.)
To state that a technology helps us (serendipitously) unveil the questionable limitations of humanity might be giving too much credit to one human technological output, and too little credit to the observations in other fields, which have been explored for centuries, if not longer. Secondly, it is a sentiment filled to the rim with epistemic fallacy: a language model, no matter how large, scraped from a part of the internet, does not represent the entire output of the entire species.
While an astronomic number of individuals spend time online (often while using one language), they are not all the individuals, nor do they produce all the symbolic artifacts, symbolic processes or the larger semiotics which humanity (and beyond!) intentionally and unintentionally has to offer. Human (and other) expression has not been statistically catalogued in one language model. Such a model surely has not catalogued the unexpressed, the unrecognized or the unarchived. There is so much more to take note of than the diffused, statistical, structured reshuffling of today’s altarpiece in our techno temple. Move away from the screen (you can go back) and let the worlds engulf you.
The act of equating the sample and the model with the entirety of the human population could indeed show a limit (not “the” limit): for instance, of humanity and our tendency toward epistemic fallacy. That is the tendency of each of us (including this author) to dichotomize and to equate the reductionist attributes of any model with the entirety of reality. Leibniz might not be happy, folks. But, yeah: “who cares; while claiming the importance of philosophy, humanity and the arts, we kings of the techno and finance courts can treat them as jesters and trivial entertainment, and ignore their reflections and the acts that they have nurtured,” I imagine a hardened voice replying.
If one ignores voices (as this text likely might as well) that have expressed limitations outside of one’s own preferred bubble(s) or zones of comfort, and if one then claims that the voices in one’s own bubble have suddenly unveiled a wisdom (which is already found elsewhere, and far more nuanced), then a conclusion, made with a largesse of “humanity” constrained into the astronomical smallness of a super large language model, might not necessarily be wise.
By the way, ‘large’ is only comparatively large, relative to the acknowledged models (excluding those that are being ignored, if any) we humans have identified before. Such an act is itself indeed human and indeed limiting (as is the nature of a model). Such attributes (e.g., the ones the voices of the first category offer), which unveil vectors within our limitations, might be, and some are, insightful. They might be, and some are, telling. Yet they are not all-inclusive, nor nuanced.
This write-up is not sufficient either; yet it tries to remind us to keep dialog, the allocation of resources, and labels of limitation an open debate, and one that should not only cater to those who have already been catered to.
Limitation can be about potential, opportunity and innovation: the opportunity to set right the wrongs experienced in some (not so small) communities and demographics. I suggest we embrace this and the other limitations (perceived, actual, dominating or outlying). ‘Embrace’ here is an act of modest and caring acknowledgement of living existence. This innovative embrace could make for richer action: relation, reflection, intention, debate, trial and error as unrecognized serendipity toward creative, continuing, life-centered innovation.
Epilogue | Asimov: Ever So Slightly Closer to I, Robot.
The First Law: “…it is impossible for a robot to harm a human being…” OR “…a robot may not injure a human being or, through inaction, allow him to come to harm.” OR “…an unbreakable First Law—which makes it impossible for them [robots] to harm human beings under any circumstance.” OR “the First Law of human safety”
The Modified First Law: “…we either had to do without robots, or do something about the First Law—and we made our choice… we had to have robots of a different nature. So just a few of the … models… were prepared with a modified First Law. To keep it quiet, all … are manufactured without serial numbers; modified members are delivered here along with a group of normal robots; and, of course, all our kind are under the strictest impressionment never to tell of their modification to unauthorized personnel.”
OR “…the Machines work not for any single human being, but for all humanity, so that the First Law becomes: ‘No Machine may harm humanity; or, through inaction, allow humanity to come to harm.’…”
The Non-First Law Robot “Law”: “…Dr. Calvin, we don’t dare let that ship leave. If the existence of non-First Law robots becomes general knowledge—” … “…it was found possible to remove the First Law.” … “But the Law, I repeat and repeat, has not been removed, merely modified.” … “What was left of the First Law was still holding him back.” … “I mean there is one time when a robot may strike a human being without breaking the First Law. Just one time.” “And when is that?” Dr. Calvin was at the door. She said quietly, “When the human to be struck is merely another robot.”
The Second Law: “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” OR “obedience”
The Third Law: “a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” OR “self-preservation”
The “fourth” law: The First Law comes first, dominating the Second and Third Laws. Then “the Second Law of obedience is superior to the Third Law of self-preservation.” [Unless we decide to modify the First Law; well, then the sky is the limit.]
Some References
Asimov, Isaac (1950, 2004). I, Robot. IN: The Robot Series. New York, NY: Bantam Spectra Books (Random House). https://archive.org/details/irobotnovel
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. IN: Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research (PMLR), 81:77-91. ML Research Press. Last retrieved on 20 April, 2023 from https://proceedings.mlr.press/v81/buolamwini18a
Ekert, A., Jozsa, R., Penrose, R., & Hameroff, S. (1998). Quantum Computation in Brain Microtubules? The Penrose–Hameroff ‘Orch OR’ Model of Consciousness. IN: Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 356(1743): 1869–96. https://doi.org/10.1098/rsta.1998.0254
Federal Trade Commission, USA (FTC). (2023, April 23). FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI. Online: ftc.gov. Last retrieved on 26 April, 2023 from https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai
Kranzberg, M. (1995). Technology and History: “Kranzberg’s Laws.” IN: Bulletin of Science, Technology & Society, 15(1): 5-13. STS Press.
Leibniz, G. W. ( ). The indiscernibility of identicals. IN: Discourse on Metaphysics, Section 9. IN: Loemker, L. (ed. and trans.) (1989). Leibniz, G. W., Philosophical Papers and Letters. Springer Netherlands. pp. 151, 187, 225-226, 231-232. (If conveniently ignoring Max Black on two distinct universes that have location as the one deviating attribute while having all other properties identical. Here one could ignorantly argue that they are not identical, and that their contexts, over time, would gradually make some attributes deviate from being identical.)
Mathijs, E., & Mendik, X. (2011). 100 Cult Films (Screen Guides). Palgrave Macmillan. p. 177.
Rogers, A. (2023, April 25). Closed AI Models Make Bad Baselines. IN: Towards Data Science (blog). Online: medium.com. Last retrieved 26 April, 2023 from https://towardsdatascience.com/closed-ai-models-make-bad-baselines-4bf6e47c9e6a?gi=65b7adc7d2ff
Zappa, F. (1974). Don’t Eat the Yellow Snow. IN: Apostrophe (’). Frank Zappa (minus The Mothers of Invention). A popular song about a mother, a son and snow of questionable substance.
This text elsewhere:
https://medium.com/@vjgvxhg/techno-limitation-rhetoric-as-human-opportunities-f8a64b6c4333