What reality would I be imagining
IF
X = belief systems; technologies; communities; relationships; systems; foods; drinks; drugs; products; brands; ideologies; dogmas; stories; pleasures; obligations; jobs; companies; bosses; etc.
AND
“Naturally, persuasive [ ___X___ ] should comply with the requirement of [my] voluntariness to guarantee [my] autonomy…
[My] voluntariness presupposes a sufficient understanding of the [interaction with ___X___ ]. But what does it mean to “understand”, and what is the sufficient degree [of understanding], really?
What is the correct reading of “understandability” – “transparency”, “explainability” or “auditability”?
How much, and what, exactly, [should I] understand about [ ___X___ ]? When can [ I ] genuinely estimate whether or not [ I ] want to [ be guided by / be part of / be constrained by / be defined by / delegate decisions to ] [ ___X___ ]?”
—animasuri’22
Perverted note-taking of https://ethics-of-ai.mooc.fi/chapter-3/4-the-problem-of-individuating-responsibilities