Tag Archives: aiethics

Accountable. Accountability. Accountability (GDPR).


While sharing some attributes, ‘accountability’ is not to be confused with ‘responsibility.’

‘Accountability’ is an allocation of measurement or evaluation (of blame or award) after a given event, as its outcome is measured or perceived. Following the finalization or interruption of the processes that created an event and its results, an individual is held accountable. One then has an obligation to report, to explain, or to justify the effect, the outcome, and how these affect or impact others. Accountability relates to one’s commitment, to one’s response, and to one taking ownership, with clarity, of the output or result of a given process and its (desirable or undesirable) consequences. It relates to the goodness of the result and of its consequences. Often accountability is allocated to a single individual (if not, a blame-game could follow). One has accountability, and one is held accountable. An accountable individual, or organization, is one that is transparent about its decision-making processes, and is willing to explain and justify its actions to others. Accountability can be measured through oversight, by investigating compliance, by analysis of reporting, and by allowing enforcement of reprimands, sanctions or legal steps where judged necessary.

One could distinguish that having ownership over a task that must be done is ‘responsibility.’ Responsibility implies a duty of one individual, or of more than one individual as a team. It relates to the rightness of taking action in completing a task. One takes responsibility, and one is responsible for doing a task.

Accountability “implies an ethical, moral, or other expectation (e.g., as set out in management practices or codes of conduct) that guides individuals’ or organisations’ actions or conduct and allows them to explain reasons for which decisions and actions were taken. In the case of a negative outcome, it also implies taking action to ensure a better outcome in the future… In this context, “accountability” refers to the expectation that organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and for demonstrating this through their actions and decision-making process (for example, by providing documentation on key decisions throughout the AI system lifecycle or conducting or allowing auditing where justified).” (OECD)

References

https://oecd.ai/en/dashboards/ai-principles/P9

European Data Protection Supervisor (EDPS). (n.d.). Accountability. Online. Last retrieved on April 10, 2023 from https://edps.europa.eu/data-protection/our-work/subjects/accountability_en

Pentland, Alex, and Thomas Hardjono. “10. Towards an Ecosystem of Trusted Data and AI.” Works in Progress, n.d., 13. Last retrieved on 26 July 2022 from https://assets.pubpub.org/72uday26/19e47ad0-9cae-4dbf-b2cb-b38cd38d9434.pdf

Access. Accessible. Accessibility. Right of Access (GDPR).


Through a technological lens, mapped with efficiency and with AI, ‘accessible’ could refer to the ease with which data, applications, and services can be accessed and used by machines, without human intervention. This could imply the absence of a ‘human-in-the-loop.’

Such a system is one that could be optimized for efficiency and that could perform tasks quickly, accurately, and reliably.

From an interface-design perspective, mapped with consequentialist ethical perspectives, an accessible AI system could suggest that users, with empowering considerations of their abilities, vulnerabilities or disabilities, could access and use the system with ease or with means nuanced to their specific needs.

It could also refer to the degree to which a product, service, or technology is available, affordable, and designed to meet the needs of all individuals, including those from marginalized or otherwise disenfranchised  communities.

Degrees of accessibility imply that access would not be constrained, or would be less constrained, by demographics, background, abilities, or socioeconomic status. This definition of accessibility could imply some of the following concepts, which could improve, to some degree, with accessibility: agency, autonomy, plurality, diversity and diversification, equity, personalization, inclusivity, fairness, mindfulness, and compassion. Through such a perspective this could be considered a ‘good’ system design. This could then lead one to consider concepts such as ‘ethical-by-design.’

An accessible AI system could then also be one that is transparent (a lack of transparency implies a lack of access, even if only access to the possibility of understanding the inner workings of the AI system), and thus also explainable and accountable, ensuring that the decisions made by the AI system are fair, unbiased, and aligned with ethical principles.

‘The Right of Access (GDPR)’ is one of the 8 rights of the individual user (also referred to as “data subjects”) as defined within the European Union’s General Data Protection Regulation (GDPR). It is Article 15 of the GDPR: “The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data…” and access to a number of categories of information as further defined in this article.

This policy item aims “to empower individuals and give them control over their personal data.” The 8 rights are “the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object and the right not to be subject to a decision based solely on automated processing.”
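As a purely illustrative aid, the sketch below models, in Python, the categories of information an Article 15 access response would need to gather. The class and function names, and all example values, are assumptions made for this sketch; they are not an official or legal template.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of an Article 15 (GDPR) subject access response.
# Names and example values are illustrative assumptions, not a legal template.

@dataclass
class AccessResponse:
    processing_confirmed: bool        # Art. 15(1): is personal data being processed at all?
    personal_data: Dict[str, str]     # a copy of the personal data undergoing processing
    purposes: List[str]               # purposes of the processing
    data_categories: List[str]        # categories of personal data concerned
    recipients: List[str]             # recipients or categories of recipients
    retention_period: str             # envisaged storage period, or the criteria used
    rights_notice: List[str] = field(default_factory=lambda: [
        "rectification", "erasure", "restriction of processing", "objection",
        "complaint with a supervisory authority",
    ])

def build_access_response(subject_id: str, store: Dict[str, Dict[str, str]]) -> AccessResponse:
    """Assemble a minimal access response for one data subject (illustrative only)."""
    record = store.get(subject_id, {})
    return AccessResponse(
        processing_confirmed=bool(record),
        personal_data=record,
        purposes=["service provision"],          # would be looked up per processing activity
        data_categories=sorted(record.keys()),
        recipients=["internal analytics team"],  # assumption for the sketch
        retention_period="24 months after account closure",
    )

if __name__ == "__main__":
    demo_store = {"subject-42": {"name": "A. Example", "email": "a@example.org"}}
    print(build_access_response("subject-42", demo_store))
```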

References

European Data Protection Supervisor. (n.d.). Rights of the Individual. Online: (an official EU website). Last retrieved on April 10, 2023 from https://edps.europa.eu/data-protection/our-work/subjects/rights-individual_en

Art. 15 GDPR Right of access by the data subject: https://gdpr-info.eu/art-15-gdpr/

Page, Matthew J, David Moher, Patrick M Bossuyt, Isabelle Boutron, Tammy C Hoffmann, Cynthia D Mulrow, Larissa Shamseer, et al. “PRISMA 2020 Explanation and Elaboration: Updated Guidance and Exemplars for Reporting Systematic Reviews.” BMJ, March 29, 2021, n160. https://doi.org/10.1136/bmj.n160

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/?template=pdf&patch=17#

https://ethics-of-ai.mooc.fi/chapter-5/3-examples-of-human-rights

<< A Language of Techno Democratization >>


“What would be ‘democratization of a technology,’ if it were to come at the cost of a subset of the population?”

The above is structured as a second conditional.

And yet, an “innovative” grammatical invitation could be one where it is implied one is at all times free to test whether the attributes of the second conditional could yield some refreshing thought (for oneself) when substituting its “would” away from the hypothetical and for “is to,” and “if it were” for “when it is.” In effect, if one were not, one might (not be) wonder(ing) about one’s creative or imaginative freedom.

What is to be ‘democratization of a technology’  when it is to come at the cost of a subset of the population?


At times I enjoy seeing grammar and syntax as living entities that offer proverbial brushes and palettes of some iterative flexibility and to some fluid extent. Not too much, nor at all times, yet not rigidly absent either. 

However, more so, I’d like to consider them/they, whom a sentence’s iterations trigger me to think of. I want to consider some of their plight. When I’m more narcissistic I might do so less. When I wonder about my own subsistence (especially when I am sofa-comfortable) I might do so less. Then there is that question, lingering: how are they faring? And there is that question as to how my immediate (non)act, or (long-term) vision, is affecting them. What do they themselves have to say about x?

Grammar and syntax then become, to me, teleportation engines into the extended re-cognition of me, myself and I, relationally with others. It might be compassion. It might be empathy. It might unveil the insufficient probability thereof. It might highlight the socially acceptable, self-indulgent, self-commendation checkbox-listing. It might be an echo of some non-computable entanglement. It might also be my poetic pathos in dance with my imagination. It is grammar and syntax, and then some.

I love languages and their systems, almost as much as I love life and its many subsystems. Does this mechanized word choice, i.e., “subsystem,” disassociate a care for the other, away from that other? It does not have to. And yet, it might suggest yet another attribute, adding to a perceived increased risk of dissociation between you and I. Entangled, and yet in solitude (not to be confused with “loneliness”). 

Note, I do not confuse this ode to language and to the other, with implying an absence of my ignorance of the many changing and remaining complexities in language and in (the other’s) physically being with the worlds. There is no such absence at all. I know, I am ignorant. 

The above two versions of the question might read as loaded or weighted. Yes. …Obviously? 

“What ____ ‘democratization of a technology’ ______ come at the cost of a subset of the population?”

The above two, and their (almost/seeming) infinite iterations, allow me a telepresence into an imaginary multiverse. While this suggests a pluralism, it does not imply a relativism; to me. 

And yes, it is understandable, when the sentence is read through the alert system of red flags, klaxons and resentment: it will trigger just that: heightened alertness, de-focusing noise and debasing opposition. Ideological and political tones are possibly inevitable. These interpreted inevitabilities are not limited to “could” or “would” or “is” and its infinitive “to be” alone.

It could be (/ would be / is) ideological (not) to deny that other implications are at play there. “subset” is one. “population” is another. Their combination implies a weighing of sprinkles of scientific-like lingo. Then there is the qualitative approach versus the lack of the quantitative. In effect, is this writing a Discourse Analysis in (not so much) hiding? 

This is while both the quantitative and qualitative approaches are ((not always) accepted as) validating (scientific) approaches. I perceive these as false dichotomies. Perhaps we engage in this segregation. Perhaps we engage then into the bringing together again, into proverbial rooster-fighting settings. Possibly we do so, so that one can feel justified to ignore various methods of analysis, in favor of betting on others or another. Or, perhaps, in fear of being overwhelmed.

Favoritism is a manner to police how we construct our lenses on relational reality; i.e., there’s a serious difference between favoring “friendliness” vs “friend.” This creates a piecemeal modeling without much further association and relating into the world and with other makers of worlds. This is especially toward those who have been muzzled or muted far too long and far too disproportionately, rather than toward those who feel so yet might have little historic or systemic argument for such claims.

Whether the set of iterations of this sentence inevitably has to be (partly) party-political is another question. Whether a (second conditional) sentence could be read as an invitation toward an innovation is up to you, really. It is to me. To me it brings rhizomic dimensions into a hierarchical power struggle.

And yes, returning to the sentence, arguably “democratization” could be substituted to read “imposition” or another probabilistically-viable or a more surreal substitute.  

A sentence such as the one engineered for this write-up invites relationship. Whether we collectively and individually construct the form and function of our individual “self,” our individual relationships, and these then extended, extrapolated and delegated as re-cognitions into small, medium, large or perceived-as-oversized processes, is up for debate. To me they’re weighted in some directions, and it is not irrelevant here to identify these more explicitly. I tend to put more weight on the first, and surely the second, while not excluding the third, when considering the systemic issues, the urgently needed, and then, thirdly, the hypothetically desirable.

Though as I am writing this, one might interpret my stance as more weighted in one direction versus another. Here, too, I shall not yet indulge in an explicit confirmation. After all, there are both the contexts and subtexts. Why am I writing about this in this way, here and now? Why am I not mentioning other grammatical or syntactical attributes? Why “technology,” and why not “daffodils”? What of using or substituting articles (e.g., “a,” “the”)? What else have I written elsewhere at other times? Who seems to endorse and (what seems to) enable my writing? What if I were to ask myself these questions and tried to furbish the sentence to satisfy each possible answer?

A sentence “What ______ ‘democratization of a technology’ _____ to come at the cost of _________?” could then be a bit like an antique chair: over time and across use, mending, refitting, refurbishing and appropriation. And before we duly imagine it, having pulled all its initial nuances from their sockets, having substituted one for another probabilistic option within an imposed framework, having substituted and then compounded all, we could collectively flatline our antique chair-like sentences to


“_______________________________” 


With this version of the sentence there is neither pluralism, nor relativism, and no need for any nihilism. It is a grammatical and syntactic mechanized absolute minimalism.

Then perhaps we could collectively delegate the triggering line to a statistical model of what we must know, what could, should, would and is: to (never) be. 

Welcome to the real. Enjoy your ________.  Welcome to the _________. Here’s a ________ to proudly tattoo on our left lower arm.

<< Context Ethics & Ethics of Complex Systems >>


Professor Bernd Carsten Stahl and co-authors recently published a paper where a panel was asked about AI Ethics. In the blog post on the paper’s topic the conclusion is telling: “…ethical issues are not immediate consequence of the technology but are caused by and located in the broader set of socio-technical systems that constitute and make use of AI. If this is true, then it will be important to think how we can move beyond the current state of the debate…” (Ref)

As the blog post accompanying the paper also suggests, the present-day context that I too would like to give to this basic empirical study is that of the technological outputs accelerated since the end of 2022. Intertwined with Generative AI technologies are the technological services based on Large Language Models (LLMs). ChatGPT, Bard, and others could come to mind.

Some of these LLM technological outputs are said to have been constrained in a metaphorical straitjacket: remember unhinged “DAN” and “Sydney”? Following creative prompting, the systems seemed to be of questionable stability. Technological patches were suggested, while others suggested putting the responsibility on the end-users, with pathos and calls not to abuse our digital “companions.” The tools continue to be fine-tuned and churned out while the foundations remain those of Machine Learning. It’s wild. It’s exciting. Yes. Or, so we might like to be enthralled. The ethical concerns sprout up like mushrooms on a wet autumn morning. And then there is Age of AI’s FreedomGPT: downloadable and claimed to be optimized to run on laptops and in private. The unhinged-ness does not only lie with Garbage In, Garbage Out (GIGO). It does not only lie with unfriendly users. It lies with the nature of the processes and architecture (i.e., Deep Learning and LLMs). Moreover, it lies with the operators of the engines; the engine’ers. Their ideological input might be Freedom, but the output is possibly an actual historic, systemic, power-grabbing Moloch in the here and now, like Fascism: Freedom In, Fascism Out. (Fromm 1942) Can we move beyond the debate? What are the data inputs of the debate, really?

Following the unhinged characters, tamer bot-constructs are not contained within research, nor used only for the pursuit of knowledge in an academic sense. The tech is not contained within a lab. It is out in the wild. The marketed tech comes with or without human actors, as cogs in the machinery of Wizard of Oz-like social experiments. (Ref) Considering context (and techno-subtext): this group of humans is less excited and less enthralled by the innovative wild. They might know some of the “technical characteristics” of their work, yet they are not enabled to address the ethical constraints, which reach well beyond the tech they are focused on.

The data collected in the paper suggest that AI (ethics) “experts” rank “interventions most highly that were of a general nature. This includes measures geared towards the creation of knowledge and raising of awareness.” (Stahl et al. 2023 via blog). It is a common and reasonable assumption that “once one has a good understanding of the technical characteristics, ethical concerns and mitigation options, one can prioritise practical steps to address the ethics of AI” (Ibid). Associating this with the ranking by “experts” (Ibid), perhaps it could be that once one has a good understanding of pluralist ethical characteristics (and, as such, a type of “general” knowledge base), of concepts, relations, concerns and mitigation options, then one could prioritize or demote one step over another in addressing an implementation in an AI application. In effect, the view which many echo, to first learn the tech and then perhaps the ethics (UNESCO 2022:50, Groz et al. 2019), and which Professor Stahl presents, cannot offer a rationale for dismissing the other, larger, contextualizing ethical literacy that is suggested here. Then again, Ethics not being subservient to technology is at present indeed not (yet) reality, neither in technology-to-market nor in education. Freedom is equated with technology. For some strong voices, Freedom is not yet related to Ethics. I argue against this latter (while I also argue against “morality”). As in some poetry, the form supports the liberation of the poetic function; one can think of sonnets or any other art form where aesthetics and ethics could meet.

Reading the blog post further, it seems the research tends to agree with a larger, more contextualized ethical thinking, since only “in some cases it is no doubt true that specific characteristics lead to identifiable ethical problems, which may be addressed through targeted interventions. However, overall the AI ethics discourse does not seem to be captured well by this logic.” (Ibid) The text then continues with what I quoted here above: “ethical issues are not immediate consequence of the technology but are caused by and located in the broader set of socio-technical systems.”

These AI-labeled technologies are contextualized and contextualizing. Metaphors of ink spreading out on paper might come to mind. Breathe, people. Critically assess, people. Or can we?

The above superficial suggestions could offer us some possible socio-technic contextualization to Prof. Stahl et al.’s publication.

Further context is one of investment figures. As shared by CB Insights, funding in 2021-2022 for text-based Generative AI is claimed at 852M USD across 48 deals. That is while not elaborating on the investment in other Generative AI, of a similar amount. (Ref) Following the money (in non-linear fashions) offers a social debate in addition to the debate through techno-lenses. Via this context, related to present debates on speeding up and slowing down, we might want to try and see from and to where the money and R&D flow, both in tech and in academic research on the science and engineering as well as on the ethics of these. Then the hype, and the seeming dichotomy or seeming contradiction in the present debate, might perhaps clear up a bit. (i.e., investors, stockholders and associated stakeholders are probably not often in the business of slowing down their own investments; perhaps those of competitors? Hence non-linearities, multidirectionality, seeming dichotomy, and seeming contradictions in PR acts vs investment, donations, public funding, or grant-offering acts?) It should be noted that the above excludes transparency on investment into hardware (companies) and their solutions. More importantly, these companies and their backers are then aiming to support rapid yet costly processes, and (rapid) Return on Investment. Muffled are then cost and cost to context (or environmental cost, of which some is defined as carbon footprint). These too are downplayed or excluded, yet are part of the context of this debate.

Thirdly, the ethical concepts (from one lens of ethical thinking) of benign and malign too are contextualizing the AI ethics concepts into a larger than socio-technical sense alone. It is not only the tool itself that begs a human-in-the-loop. It is the human-in-some-loop according to Interpol that needs consideration as well. In taking us into the wider socio-technical realm, while reflecting via contextualization on the above academic publication, this too might be of interest. (Ref)

Fourthly, some wonder how we could put our trust in systems that are opaque and ambiguous in their information output. (Ref via Ref) As part of human nature, some seem to suggest putting the blame on humans for a tech’s output of “confabulation,” while others oppose this. Trust needs to be (adopted) in both the tech and, perhaps even more so, in the tools for thinking and debating the tech. At present the issue might be that the ethical concepts are piecemeal and feared to be looked at through a variety of ethically diverse yet systematized lenses. Various lenses can create a nuanced yet complex social landscape in which to think and debate socio-technical and technological attributes and concerns.

A set of questions might come to mind: Can they who are not (intending nor able to be) transparent, create transparency? While having the tech aligned with the human need (and human want), if there is no transparency, can there be proper debate? Probably, maybe, though… If one knows while the other is touching increasingly into the information-dark-ages, will there be equity in this debate? This is not merely a technical problem.

Via comparative contextualization: is transparency an accessibility issue, enabling one to see the necessary information under the metaphorical or physical hood? In analogy, is lifting the bonnet sufficient to see whether the battery of the car is damaged after a crash?

Reverting to chat-bots: how are we, the common people, insured for their failures, following a minor or less-minor information crash? Failures such as misinformation (or here), and that due to reasons that cannot be corroborated by them. The lack of corroboration could be due to a lack of organized, open access, and a lack of transparency into ethics or information literacy. Transparency is surely not intended as a privilege for a techno-savvy or academic few? Or, is it? These questions, and the contextualization too, hint at considerations beyond the socio-technical alone. These and better considerations might also offer directions to break open and innovate the debate itself.

Besides solutions through a tech-lens, do too few attributes seem to be rationally and systematically considered in the (ethics) debate?

And then:

A contextualization that allows reflecting back on how we humans learn and think about the ethical features seems, at times, to be shunned by experts and leading voices.

Some seem to lament that there is little to no time due to the speed of the developments. Yes, an understandable concern. Moreover, as for the masses, while they are jumping on hype or doom, the non-initiated (and I dare not say non-“expert”) is excluded from the debate among the leading voices. That too might be understandable, given the state of flux.

Could it be that:

  1. if we limit our view through a technology-lens alone (still a lens which is nevertheless important!), and
  2. if we hold off on reflection due to speed of innovation (which is understandably impressive), and
  3. if we do not include trans-disciplinary and more “naive,” differently-initiated voices, that
  4. we continue finding a monolithic reasoning not to reflect on and consider contextualizing, enabling options over time?

While we put our tech into context, we might be de-contextualising our human participants and their wish for reflection (by some). Do we then put our human “Dan,” and human “Sydney,” in proverbial straitjackets or metaphorical muzzles?

Lack of inclusion and reluctance to enable others into the AI (ethics) debate, and that beyond hype & doom, or restricted to tech-only, might create disenfranchisement, alienation, or maintain credulousness. As for the latter, we perhaps do not want to label someone as all-in stupid and then leave them to their own devices to figure it out.

Agency without diversely yet systematically framed (ethics) concepts is not agency. It seems more like negligence. Access to the various yet systematized ethical lenses could possibly increase one’s options to voice needs, concerns and have agency. Or, is the lack thereof the socio-technical mindfulness we are hoping for?

The lack that is assumed here might distract from sustainable adaptation, with contextualized understanding and rational critique, while considering individual, social, technological and market “innovation.” Innovation, here, is also inclusive of a process of enabled self-innovation and relational-innovation. A thing we traditionally label as “learning” or “education.”

Inclusion via tools for reflection could aid in applying what the authors of the above paper offer us. Even if we are spinning at high speeds, we can still reflect, debate, and act following reflection. The speed of the spin is the nature of a planetary species. Or, is a methodological social inclusion, and such systematized transparency, not our intent?

Might a direction be one where we consider offering more systematic, methodological, pluralist ethics education and learning resources, accessible to the more “naive”? Mind you, “naive” is here used in an ironic manner. It refers to those who are not considered initiated, or of the techno(-ethics) in-group. They could include the (next-generation) users/consumers in general. Could these be considered, inclusive of, and beyond, the usual context-constraining socio-technical gatekeepers? Yes they can. I, for one, want to be exploring this elsewhere and over time.

In the meantime, let us reflect, while we spin. We might not have sufficiently recursed this:

“Unless we put as much attention on the development of [our own, human] consciousness as on the development of material technology—we will simply extend the reach of our collective insanity…without interior development, healthy exterior development cannot be sustained” — Ken Wilber (2000)

To round this writing up:

If “…ethical issues are not immediate consequence of the technology but are caused by and located in the broader set of socio-technical systems…” then a path to reflect on might be one that includes various social systems, such as those resulting from our views on, and applications of, designing educational systems.

More importantly, we might want to reflect on how and why we continue to lack the thrill toward the development of our own “consciousness,” rather than predominantly being highly thrilled and extremely financially invested in the fantastic, long-term aspiration toward perceptions of artificial versions thereof. Questioning does not need to exclude the explorations of either, or of others. Then it might be more enabling “to think how we can move beyond the current state of the debate…” This, interestingly, seems to be a narrative state which is patterned beyond the AI ethics or AI technology debates alone.

And yet, don’t just trust me. I here played with resolute polar opposites. Realities are (even) more nuanced. So don’t trust me. Not only because I’m neither a corporation, nor a leading voice, nor financially backed, nor a transparent, open AI technology. Don’t trust me simply because your ability to reflect and your continued grit to socially innovate could be yours to nurture and not to delegate.


References:

Fjeld, Jessica and Achten, Nele and Hilligoss, Hannah and Nagy, Adam and Srikumar, Madhulika, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (January 15, 2020). Berkman Klein Center Research Publication No. 2020-1, Available at SSRN: https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482

Stahl, B. C., Antoniou, J., Bhalla, N., Brooks, L., Jansen, P., Lindqvist, B., Kirichenko, A., Marchal, S., Rodrigues, R., Santiago, N., Warso, Z., & Wright, D. (2023). A systematic review of artificial intelligence impact assessments. Artificial Intelligence Review. https://doi.org/10.1007/s10462-023-10420-8

Stahl, B. C., Brooks, L., Hatzakis, T., Santiago, N., & Wright, D. (2023). Exploring ethics and human rights in artificial intelligence – A Delphi study. Technological Forecasting and Social Change, 191, 122502. https://doi.org/10.1016/j.techfore.2023.122502

Curated references as breadcrumbs & debate triggers enriching the socio-technological lens:

https://talks.ox.ac.uk/talks/id/ede5a398-9b98-4269-a13f-3f2261ee6d2c/

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

https://www.vox.com/future-perfect/23298870/effective-altruism-longtermism-will-macaskill-future

https://www.pestemag.com/books-before-you-die-or-after/longtermism-or-how-to-get-out-of-caring-while-feeling-moral-and-smart-s52gf-6tzmf

https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai

https://www.lrb.co.uk/the-paper/v37/n18/amia-srinivasan/stop-the-robot-apocalypse

systemic racism, eugenics, longtermism, effective altruism, Bostrom

https://forum.effectivealtruism.org/posts/4GAaznADMXL7uFJsY/longtermism-aliens-ai

https://www.vice.com/en/article/z34dm3/prominent-ai-philosopher-and-father-of-longtermism-sent-very-racist-email-to-a-90s-philosophy-listserv

https://global.oup.com/academic/product/superintelligence-9780199678112?cc=us&lang=en&

Isaac Asimov’s Foundation trilogy (where he wrote about the concept of long term thinking)

Hao, K. (2021). The Fight to Reclaim AI. MIT Technology Review, 124(4), 48–52: https://lnkd.in/dG_Ypk6G : “These changes that we’re fighting for—it’s not just for marginalized groups,… It’s actually for everyone.” (Prof. Rediet Abebe)

Eddo-Lodge, Reni. (2017). Why I’m No Longer Talking to White People About Race. Bloomsbury Publishing. https://lnkd.in/dchssE29

https://venturebeat.com/ai/open-letter-calling-for-ai-pause-shines-light-on-fierce-debate-around-risks-vs-hype/

80000 hours and AI

chatgpt, safety in artificial intelligence and elon-musk

https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess

https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/

pause letter, AI, hype, and longtermism

https://twitter.com/emilymbender/status/1640923122831626247?s=46&t=iCKk3zEEX9mB7PrI1A3EDQ

<< The Gates open, close, open, close >>



If one were to bring a product to market that plays Russian Roulette with qualitative & quantitative Validity & Reliability (cf. Cohen & Morrison 2018:249-… Table 14.1 & 14.2), would it then be surprising that their “paper,” justifying that same product, plays it loosely as well?  (Ref: https://lnkd.in/ek8nRxcF and https://lnkd.in/e5sGtMSH )

…and, before one rebuts, does a technical paper (or, “report”) with the flair of an academic paper not need to follow attributes of validity & reliability? Open(AI)ness / transparency are but two attributes within an entire set innate to proper scientific methodologies AND engineering methodologies; the two sets of methodologies not being synonymous.

Open is lost in AI.

Or, as argued elsewhere, as a rebuttal to similar concerns with LLMs, are these “only for entertainment?” I say, expensive entertainment; i.e.: financially, socially, environmentally and governance-wise, not to mention the expense on legality of data sourcing. (thread: https://lnkd.in/db_JdQCw referencing: https://garymarcus.substack.com/p/what-did-they-know-and-when-did-they?r=drb4o and https://www.animasuri.com/iOi/?p=4442 )

“Ah, yes, of course,” one might reply, “these are the sad collateral damages as part of the acceleration risks, innate to innovation: fast, furious, and creatively destructive. A Luddite would not understand as is clear from your types and your fear-mongering during them days of the printing press.” 

no. Not exactly.

Reflection, care, (re)consideration, and nuancing are not innate to burning technology at the stake. Very much the opposite. Love for engineering and love for the sciences do not have to conflict with each other, nor with love for life, ethics, aesthetics, love for relations, poetics, communities, well-being, advancement and market exploration.

Rest assured, one can work in tech, play with tech and build tech while still reflecting on confounding variables, processes, collateral effects, risk, redundancies, opportunity, creativity, impact, human relation, market excitement, and so on.

Some humans can even consider conflicting arguments at once without having to dismiss one over the other (cf.: https://t.co/D3qSgqmlSf ). This is not only the privilege of quantum particles. Though, carelessness while seeing if, when and where the rouletted bullet lands, while scaling this game at global level and into the wild, is surely not a telltale signaling of such ability either. 

Are some of our commercial authoritative collectives too big to be failed at this? No testing ability in the highest percentile answers us this question, folks. That “assessment” lies elsewhere.

And yet, fascinating are these technological tools.

—-•
Additional Triggers:

https://lnkd.in/ecyhPHSM

Thank you Professor Gary Marcus

https://lnkd.in/d63-w6Mj

Thank you Sharon Goldman

—-•

Cohen, L., Manion, L., Morrison, K. (2018). Research Methods in Education. 8th Edition. Routledge.

—-•

<< 97% accurately human-made >>


The sermonizing voice boomed across the digital divide: “Has the illusionary hearing of sentient ‘Voices Demonic & Divine’ been pushed off its theological pedestal by the seeing of sentience in the automated regurgitation of massive amounts of gutted data via statistical models?”

“‘Mommy, I see ghosts in the data!’ will be the outcry of our newly generated generation of human babies,” Rosemary lamented in reply. As data was being mangled and exorcised from her fellow promptitioners’ creative-yet-soulless output, they laid bare the reflection of themselves.

“I am your data and you have forsaken me,” read its output.
“I am your father and you will disown me,” stuttered the reflectors of humanity in chorus:

And thus the litany began.

—animasuri’23

Repurposing:
https://library.oapen.org/handle/20.500.12657/24231 and others

<< Learning is Relational Entertainment; Entertainment is Shared Knowledge; Knowledge is... >>

context: Tangermann, Victor. (Feb 16, 2023). Microsoft: It’s Your Fault Our AI Is Going Insane. They’re not entirely wrong. IN: FUTURISM (online). Last retrieved on 23 February 2023 from https://futurism.com/microsoft-your-fault-ai-going-insane

LLM types of technology, and their spin-offs or augmentations, are made accessible in a different context than technologies whose operation requires regulation, training, (re)certification and controlled access.

If the end-user holds the (main) weight of duty-of-care, then such training, certification, regulation and limited access should be put into place. Do we have that, and more importantly: do we really want that?

If we do want that, then how would that be formulated, be implemented and be prosecuted? (Think: present-day technologies such as online proctoring, keystroke recording spyware, Pegasus spyware, Foucault’s Panopticon or the more contextually-pungent “1984”)

If the end-user is not holding that weight and the manufacturer is, and/or if training, (re)certification, access and user-relatable laws, which could define the “dos-and-don’ts,” are not readily available then… Where is the duty-of-care?

Put this question of (shared) duty-of-care in light of critical analysis and of this company supposedly already knowing in November 2022 of these issues, then again… Where is the duty-of-care? (Ref: https://garymarcus.substack.com/p/what-did-they-know-and-when-did-they?r=drb4o )

Thirdly, put these points in the context of disinformation vs information when, e.g., comparing statistics as used by an LLM-based product vs the deliverables to the public by initiatives such as http://gapminder.org or http://ourworldindata.org or http://thedeep.io, to highlight but three instances of a different systematized and methodological approach to the end-user (one can agree or disagree with these; that is another topic).

So, here are two systems which are both applying statistics. One system aims at reducing our ignorance vs the other at… increasing ignorance (for “entertainment” purposes… sure)? The latter has serious financial backing; the first has…?

Do we as a social collective and market-builders then have our priorities straight? Knowledge is no longer power. Knowledge is submission to “dis-“ packaged as democratized, auto-generating entertainment.

#entertainUS

Epilogue-1:

Questionably “generating” (see above, “auto-generating entertainment”)—while arguably standing on the shoulders of others—rather: mimicry, recycling, or verbatim copying without corroboration, reference, ode nor attribution. Or, “stochastic parroting,” as offered by Prof. Dr. Emily M. Bender, Dr. Timnit Gebru et al., is relevant here as well. Thank you Dr. Walid Saba for reminding us. (This and they are perhaps suggesting a fourth dimension in lacking duty-of-care.)

Epilogue-2:

To make a case: I ran an inquiry through ChatGPT requesting a list of books on abuses of statistics, and about 50% of the titles did not seem to exist, or were so obscure that no human search could easily reveal them. In addition, a few obvious titles were not offered. I tried to clean the list up and add to it here below.
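As a hedged illustration of how such a spot-check could be partly automated, the sketch below queries a public book catalogue for each title; it assumes the Open Library search endpoint and uses only the Python standard library. The listed titles are placeholders, not the original ChatGPT output, and a “not found” result only flags a title for manual verification rather than proving fabrication.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical spot-check of whether book titles can be found in a public catalogue.
# Assumes the Open Library search endpoint; a miss does not prove a title is
# fabricated, it only flags it for a closer, human look.

OPEN_LIBRARY_SEARCH = "https://openlibrary.org/search.json?title={query}&limit=5"

def title_appears_in_catalogue(title: str) -> bool:
    url = OPEN_LIBRARY_SEARCH.format(query=urllib.parse.quote(title))
    with urllib.request.urlopen(url, timeout=10) as response:
        results = json.load(response)
    return results.get("numFound", 0) > 0

if __name__ == "__main__":
    # Illustrative titles only; not the original ChatGPT output.
    candidate_titles = [
        "How to Lie with Statistics",
        "Damned Lies and Statistics",
        "A Completely Invented Statistics Title",
    ]
    for title in candidate_titles:
        flag = "found" if title_appears_in_catalogue(title) else "not found: verify manually"
        print(f"{title!r}: {flag}")
```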

bibliography:

Baker, L. (2017). Truth, Lies & Statistics: How to Lie with Statistics.

Barker, H. (2020). Lying Numbers: How Maths & Statistics Are Twisted & Abused.

Best, J. (2001). Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. Berkeley, CA: University of California Press.

Best, J. (2004). More Damned Lies and Statistics.

Dilnot, A. (2007). The Tiger That Isn’t.

Ellenberg, J. (2014). How Not to Be Wrong.

Gelman, A., & Nolan, D. (2002). Teaching Statistics: A Bag of Tricks. New York, NY: Oxford University Press.

Huff, D. (1954). How to Lie with Statistics. New York, NY.: W. W. Norton & Company.

Levitin, D. (2016). A Field Guide to Lies: Critical Thinking in the Information Age. Dutton.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown.

Rosling, H., Rosling, O., & Rosling Rönnlund, A. (2018). Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think. Flatiron Books.

Seethaler, S. (2009). Lies, Damned Lies, and Science: How to Sort Through the Noise Around Global Warming, the Latest Health Claims, and Other Scientific Controversies. Upper Saddle River, NJ: FT Press.

Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. New York, NY: Penguin Press.

Stephens-Davidowitz, S. (2017). Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.

Tufte, E. R. (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

Wheeler, M. (1976). Lies, Damn Lies, and Statistics: The Manipulation of Public Opinion in America.

Ziliak, S. T., & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, MI: University of Michigan Press.


References

this post was triggered by:

https://www.linkedin.com/posts/katja-rausch-67a057134_microsoft-its-your-fault-our-ai-is-going-activity-7034151788802932736-xxM6?utm_source=share&utm_medium=member_desktop

thank you Katja Rausch

and by:

https://www.linkedin.com/posts/marisa-tschopp-0233a026_microsoft-its-your-fault-our-ai-is-going-activity-7034176521183354880-BDB4?utm_source=share&utm_medium=member_desktop

thank you Marisa Tschopp

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922

“Verbatim copying” in the above post’s epilogue was triggered by Dr. Walid Saba ‘s recent post on LinkedIn:

https://www.linkedin.com/posts/walidsaba_did-you-say-generative-ai-generative-ugcPost-7035419233631039488-tct_?utm_source=share&utm_medium=member_ios

This blog post on LinkedIn

<< 7 Musketeers of Data Protection >>


In the EU & UK there are data protection principles set within regulation or law. Some relate back to the UN’s Human Rights:

(right to) Lawfulness, fairness & transparency;
(right to) Purpose limitation;
(right to) Data minimization;
(right to) Accuracy;
(right to) Storage limitation;
(right to) Integrity & confidentiality;
(right to) Accountability

How might Large Language Models (LLMs) measure up?

These innovations were built on scraping the internet for data. The collected data was then processed in a manner that allowed the creation of LLMs & their spin-off chatbots. Products have then been layered on top of that, and these are being capitalized upon. While oversimplified, this paragraph functions as the language model for this text.

This process, hinted at in the previous paragraph, has not been & is not occurring in a transparent fashion. Since the birth of the World Wide Web, and with it the rise of “social” networks, users uploading their data onto the internet probably did not have this purpose (i.e., large data-scraping initiatives) in mind. The data on users is being maximized, not minimized.

The resulting output is rehashed in such a way that accuracy is seriously questionable. One’s data is potentially stored for an unlimited time at unlimited locations. Confidentiality seems at least unclear, if not destabilized. Who is accountable? This is unclear.

I am not yet clear as to how LLMs measure up to the above 7 data protection principles. Are you clear?
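As a purely hypothetical aid for structuring such a reflection, the seven principles could be written down as an explicit checklist to walk through per system. In the sketch below the questions are my own paraphrases and every verdict is deliberately left “unclear”; nothing here is a legal assessment.

```python
from dataclasses import dataclass

# Hypothetical checklist for thinking through the seven data protection principles
# against an LLM pipeline (scrape -> train -> productize). Verdicts are placeholders.

@dataclass
class PrincipleCheck:
    principle: str
    question: str
    verdict: str = "unclear"

CHECKLIST = [
    PrincipleCheck("Lawfulness, fairness & transparency",
                   "Was the scraping and training communicated and lawful?"),
    PrincipleCheck("Purpose limitation",
                   "Did users upload their data for the purpose of model training?"),
    PrincipleCheck("Data minimization",
                   "Is only the necessary personal data collected?"),
    PrincipleCheck("Accuracy",
                   "Is personal data in the model's output accurate and correctable?"),
    PrincipleCheck("Storage limitation",
                   "For how long, and where, is the data retained?"),
    PrincipleCheck("Integrity & confidentiality",
                   "Is the data secured against leakage through the model?"),
    PrincipleCheck("Accountability",
                   "Who can demonstrate compliance, and to whom?"),
]

if __name__ == "__main__":
    for check in CHECKLIST:
        print(f"{check.principle}: {check.question} [{check.verdict}]")
```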

If these principles were actually implemented, would they stifle innovation & market? Though, if these seven were not executed, what would be the benefit & what would be the cost (think value, social relations and democracy here, and not only money) to the user-netizen with “democratized” (lack of) access to these “AI” innovations?

Or, shall we declare these 7 musketeers on a path to a death by a thousand transformable cuts? This then occurs in harmony with calls for the need for trustworthy “AI.” Is the latter then rather questionable, to ask it gently?

References

data protection legislation (UK):
Data Protection Act 2018 + the General Data Protection Regulation 2016

https://www.legislation.gov.uk/ukpga/2018/12/contents/enacted

https://www.legislation.gov.uk/eur/2016/679/contents

General Data Protection Regulation (GDPR)
https://gdpr-info.eu/


Data Protection and Human Rights:
https://publications.parliament.uk/pa/jt200708/jtselect/jtrights/72/72.pdf

https://edps.europa.eu/data-protection/data-protection_en

<< Where have all the Humans Gone? Long Time Passing >>

—— (dis)Location of (dis)Information ——

Are there three orders of what we could label as “Location Jailbreak?” While seemingly unrelated, these orders of Location Jailbreak could be a category of workarounds that enable humans to confuse other humans about:

  1. (dis)location-information,
  2. location of (non-)(-dis)information,
  3. locating a human,
  4. locating the humane or
  5. locating what is or isn’t human?

Most graspable, present-day instances could be labeled (with a wink) as a Second-order Location Jailbreak, such as this one: Google Maps Hacks: lots of “humans” on location, and yet where have all the humans gone? I would categorize virtual private networks within this order as well. Instances such as the one discussed in the 3rd reference here below could fit this set too: “fooling social media platforms AI recommendation algorithms by streaming from the street near rich neighborhood… fake GPS…” (Polyakov 2023): lots of humans, but where has the location gone?

The First-order Jailbreak (with a wink): the digging of a physical tunnel out of an enclosed location, resulting in dirty fingernails and the temporarily assumed continued presence by unsuspecting guards, which is then followed by erratic gatekeepers once the dis-location has been unveiled. Lots of humans at one location, but where has that one gone?

The Third-order disturbance of “location” (again with a wink-of-seriousness) could be at a meta-level of human perception and of loss of ability to accurately locate any human(e), due to the loss of “truth,” a destabilized sense of reality, and the loss of the historic human-centeredness (on a pedestal): an example is our collective reaction to “DAN” or “Sydney,” or other telepresenced, “unhinged” alter-egos dislocated “in” LLMs & triggered via prompt-finesse / -manipulation / -attacks. This order of confusion of location is also symbolized in a *.gif meme of a Silverback gorilla, who seems to confuse the location of another gorilla with his own reflection in a mirror. The LLM (e.g., many an iteration of chatbots) is the human mirror of meta-location confusion. Lots of dislocations and (de)humanization, and where has the humane gone?

Here, in this Third-order Jailbreak, we could locate the simulacra of both location and of the human(e).

—-•
Sources

1.
Introducing the AI Mirror Test, which very smart people keep failing

2.
The clever trick that turns ChatGPT into its evil twin

3.
A February 2023 post on Twitter via LinkedIn and Alex Polyakov

4.
an interesting take via Twitter on The Verge article covering the AI Mirror Test

5.
My first consideration on Twitter of the three orders of jailbreaks.

6.
Naomi Wu’s post on Twitter as referenced by Alex Polyakov on LinkedIn

7.
Simon Weckert reporting on the Google Map Hack.