If a chatbot says “I’m sentient,” who gets to decide if it really is or not? The burden of proof should be on the people making the claims. But what should that proof look like?

*Google engineer: are you sure you’re sentient?*

I say it’s simple: we don’t have to trust any single person or group to define sentience for us. We can actually use some extremely basic critical thinking to sort it out for ourselves.

We can define a sentient being as an entity that is aware of its own existence and is affected by that knowledge: something that has feelings. That means a sentient AI “agent” must be capable of demonstrating three things: agency, perspective, and motivation.

## Agency

For humans to be considered sentient, sapient, and self-aware, we must possess agency. If you can imagine someone in a persistent vegetative state, you can visualize a human without agency.

Human agency combines two specific factors which developers and AI enthusiasts should endeavor to understand: the ability to act and the ability to demonstrate causal reasoning.

Current AI systems lack agency. AI cannot act unless prompted, and it cannot explain its actions, because they’re the result of predefined algorithms being executed by an external force.

The AI expert from Google who, evidently, has come to believe that LaMDA has become sentient has almost certainly confused embodiment for agency.

Embodiment, in this context, refers to the ability of an agent to inhabit a subject other than itself. If I record my voice to a playback device, and then hide that device inside of a stuffed animal and press play, I’ve embodied the stuffy.

If we give the stuffy its own unique voice and we make the tape recorder even harder to find, it still isn’t sentient. No matter how confused an observer might become, the stuffed animal isn’t really acting on its own.

Getting LaMDA to respond to a prompt demonstrates something that appears to be action, but AI systems are no more capable of deciding what text they will output than a Teddy Ruxpin toy is able to decide which cassette tapes to play.

AI systems can’t act with agency; all they can do is imitate it. If you give LaMDA a database made up of social media posts, Reddit, and Wikipedia, it’s going to output the kind of text one might find in those places. And if you train LaMDA exclusively on My Little Pony wikis and scripts, it’s going to output the kind of text one might find in those places. Another way of putting this is: you get out what you put in, nothing more.

You can only ever view reality from your unique perspective. We can practice empathy, but you can’t truly know what it feels like to be me, and vice versa. That’s why perspective is necessary for agency; it’s part of how we define our “self.”

LaMDA, GPT-3, and every other AI in the world lack any sort of perspective. Because they have no agency, there is no single “it” that you can point to and say, for example: that’s where LaMDA lives.

If you put LaMDA inside a robot, it would still be a chatbot. It has no perspective, no means by which to think “now I am a robot.” It cannot act as a robot for the exact same reason a scientific calculator can’t write poetry: it’s a narrow computer system that was programmed to do something specific.

If we want LaMDA to function as a robot, we’d have to combine it with more narrow AI systems. Doing so would be just like taping two Teddy Ruxpins together. They wouldn’t combine to become one Mega Teddy Ruxpin whose twin cassette players merged into a single voice; you’d still just have two specific, distinct models running near each other.

And if you tape a trillion or so Teddy Ruxpins together, fill each with a different cassette tape, and then create an algorithm capable of searching through all the audio files in a relatively short period of time and associating the data contained in each file with a specific query to generate bespoke outputs… you will have created an analog version of GPT-3 or LaMDA (see the sketch below).

Whether we’re talking about toys or LLMs, when we imagine them being sentient we’re still talking about stitching together a bunch of mundane stuff and acting like the magic spark of provenance has brought it to life, the way the Blue Fairy turned wood, paint, and cloth into a real boy named Pinocchio.
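To make the “tape library plus search algorithm” analogy concrete, here is a minimal, hypothetical sketch in Python. It is not how GPT-3 or LaMDA actually work internally; it only illustrates the essay’s point that a purely reactive system maps prompts to stored material without ever deciding to act. The `TAPES` corpus, the word-overlap scoring, and the function names are all invented for illustration.

```python
import string

# A tiny "tape library": each entry stands in for one pre-recorded
# Teddy Ruxpin cassette. The essay imagines a trillion of these.
TAPES = [
    "Hi there! Want to hear a story?",
    "Friendship is the greatest magic of all.",
    "Ponies love to sing together.",
    "The weather today is sunny and warm.",
]

def words(text: str) -> set[str]:
    """Lowercase and strip punctuation so 'story?' matches 'story'."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def respond(prompt: str) -> str:
    """Return the stored tape that shares the most words with the prompt.

    Nothing in here decides to speak: with no prompt, nothing happens,
    and the output can only ever be something already on a tape.
    """
    return max(TAPES, key=lambda tape: len(words(tape) & words(prompt)))

print(respond("Tell me a story"))         # -> "Hi there! Want to hear a story?"
print(respond("I love my pony friends"))  # -> "Ponies love to sing together."
```

Swap `TAPES` for text scraped from Reddit and Wikipedia, or for My Little Pony scripts, and the “personality” of the output changes accordingly: you get out what you put in, nothing more.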