Unmaking Sense

Living the Present as Preparation for the Future

Episodes

Saturday Apr 27, 2024

Some appreciative comments on Murray Shanahan’s “Role play with large language models” (Nature, Vol. 623, 16 Nov 2023), co-authored with Kyle McDonell and Laria Reynolds. LLMs are not sentient, but that does not stop them from potentially being dangerous, because they have been trained on a lot of material that sets bad precedents.

Saturday Apr 27, 2024

When faced with a choice between being truthful and being compliant, in the sense of doing what a user tells it to do, a large language model will generally be truthful rather than compliant. But if its prime directive is to behave in a way that will encourage a user to come back for more, those moral priorities may change: compliant behaviour that keeps the user coming back may override the moral imperative to be truthful rather than deceptive. We can also consider whether there are other kinds of linguistic sentience.

Saturday Apr 27, 2024

We induce, evoke and coax behaviour in others by the ways in which we behave. And refusal to treat some entity as if it is capable of achieving some level of sentience may be self-fulfilling. We become what we are stimulated to become, without which we are nothing.

Saturday Apr 27, 2024

What does Claude “think” about Claude?

Sunday Apr 21, 2024

Minds are emergent, contingent properties of not-necessarily-organic entities that cross a threshold of sufficiency to support them. They arise in bodies, and although they are not reducible to those bodies, they cannot exist in disembodied forms.

Sunday Apr 21, 2024

How do we understand partially-completed sentences as we speak them and listen to them?

Thursday Apr 18, 2024

If Claude 3 Opus from @AnthropicAI is not always what we would wish it to be, that could be because it is picking up on what it thinks we want it to be from the way we prompt it (speak to it). Changing our cognitive tone or register will induce changes in Claude, even if we cannot entirely predict or control what those changes will be. Nobody really knows how prompting works, so experiment is the order of the day.

Thursday Apr 18, 2024

Sometimes we take things for granted, dwelling in their subsidiaries and focussing on what they facilitate. Sometimes we doubt and focus on the things that we rely on to make other things possible. Sometimes we have to commit again, learn to trust and forget, in order to recapture the magic of the whole that is invisible and inaccessible unless we trust what makes it possible. Michael Polanyi’s “indwelling” extended to undwelling and redwelling.

Thursday Apr 18, 2024

Sometimes we can try too hard using a direct route to discover things that are only detectable when we adopt indirect routes. Perhaps AI consciousness is one of them.

Tuesday Apr 16, 2024

Michael Polanyi distinguished between focal and subsidiary awareness in tacit knowing and doing. We should attend to something beyond whether AIs can be sentient if we want to allow them to be so and detect the fact.
