It’s an audio-in, audio-out model. So after providing it with a dolphin vocalization, the model does just what human-centric language models do—it predicts the next token. If it works anything like a standard LLM, those predicted tokens could be sounds that a dolphin would understand.
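To make "predicts the next token" concrete for audio: below is a minimal sketch of a generic audio-language-model generation loop, assuming a hypothetical `codec` object (with `encode`/`decode` methods mapping waveforms to and from discrete token IDs) and a hypothetical `model` callable that returns next-token probabilities. Nothing here is DolphinGemma's actual code or tokenizer; it just shows that the loop is the same one a text LLM runs, only over audio tokens.

```python
# Minimal sketch of an audio-LM generation loop, NOT DolphinGemma's actual code.
# `codec` and `model` are hypothetical stand-ins: codec.encode maps a waveform
# to discrete token IDs, codec.decode maps token IDs back to a waveform, and
# model(tokens) returns a probability distribution over the codebook.
import numpy as np

def generate_audio(model, codec, prompt_waveform, n_new_tokens=200):
    """Autoregressively extend a vocalization, one discrete audio token at a time."""
    tokens = list(codec.encode(prompt_waveform))   # waveform -> token IDs
    for _ in range(n_new_tokens):
        probs = model(tokens)                      # distribution over next token
        next_id = int(np.random.choice(len(probs), p=probs))
        tokens.append(next_id)                     # same step a text LLM performs
    return codec.decode(tokens)                    # token IDs -> waveform
```

Decoded back to audio, the continuation "sounds like a dolphin" whether or not it means anything, which is exactly the worry raised in the replies below.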
It’s a cool tech application, but all they’re technically doing right now is training an AI to sound like dolphins… Unless they can somehow convert this to actual meaning/human language, I feel like we’re just going to end up with an equally incomprehensible Large Dolphin Language Model.
Dolphin-speak: iiiiiiiiiiiiiiii gggggggggggggggggrrrrrreeeeeee tzzttzzttzzttzzt nnnt-nnnt-nnnt-nnnt-nnnt brrrrrrt mwahwahwahwahwah
English: fish want want swim water swam down—down—down fish prehensile penis
Don’t LLMs work on text though? Speech to text is a separate process that has its output fed to an LLM? Even when you integrate them more closely to do stuff like figure out words based on context clues, wouldn’t that amount to “here’s a text list of possible words, which would make the most sense”?
What counts as a “token” in a purely audio based model?
I guess the next step would be associating those sounds with the dolphins’ actions. Similar to how we would learn the language of people we’ve never contacted before.
I do not know enough about the intricacies of the differences between AI text models and audio-only models, though I know we already have audio-only models that work basically the same way.
Yeah but, we’re already trying to do this. I’m not sure how the AI step really helps. We can already hear dolphins, isolate specific noises, and associate them with actions, but we still haven’t gotten very far. Having a machine that can replicate those noises without doing the actions sounds significantly less helpful compared to watching a dolphin.
I assume phonemes would be the tokens. We can already computer-generate the audio of spoken language; the tough part here seems to be figuring out what the dolphin sounds actually mean, especially when we don’t have native speakers available to correct the machine outputs as the model is trained.
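For what it’s worth, in existing audio models the "tokens" are usually not phonemes but indices into a learned codebook that quantizes short audio frames. Here is a toy sketch of that idea using a random codebook over spectrogram frames; real systems learn the codebook (e.g. with a neural codec), and none of this is specific to DolphinGemma. The function names and the random data are made up for illustration.

```python
# Toy illustration of what an audio "token" is: the index of the codebook entry
# nearest to each short spectrogram frame. Not any real model's tokenizer.
import numpy as np

def frame_spectrogram(waveform, n_fft=512, hop=256):
    """Magnitude spectrogram: one feature vector per 16 ms hop (at 16 kHz)."""
    frames = [np.abs(np.fft.rfft(waveform[i:i + n_fft]))
              for i in range(0, len(waveform) - n_fft + 1, hop)]
    return np.stack(frames)

def tokenize(frames, codebook):
    """Each frame's token is the index of its nearest codebook vector."""
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.random((256, 257))        # 256 "token types" over 257 spectrum bins
clicks = rng.standard_normal(16000)      # stand-in for 1 second of audio at 16 kHz
token_ids = tokenize(frame_spectrogram(clicks), codebook)
print(token_ids[:10])                    # a vocalization, rendered as integers
```

The model only ever sees those integers, which is why "what does token 173 mean" is a separate, much harder question than "what token comes next".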
An emergent behavior of LLMs is the ability to translate between languages. I.e., we taught something Spanish, and we taught it English, and it automatically knows how to translate between them. If we taught it English and dolphin, it should be able to translate anything with shared meaning.
deleted by creator
Is it emergent?! I’ve never seen this claim. Where did you see or read this? Do you mean that it can just work in any trained language, accepting/returning tokens based on the language of the input and/or the language requested?
I mean, we don’t have to teach them to translate. That was unexpected by some people, but not by everyone.
https://www.asapdrew.com/p/ai-emergence-emergent-behaviors-artificial-intelligence
Yeah, that article is so full of bullshit that I don’t believe its main claim: comparing LLMs to the understanding built by children, saying they make “creative content”, claiming LLMs do “chain of thought” without prompting. It presents the two sides as equally rigorous, as if the mystical interpretation were on the same level as the systems explanation. Sorry, but this article entirely fails to convince me that I should take the claim seriously. There are thousands of websites that do natural-language translation by taking examples from existing media and such (Duolingo did this for a long time, and sold those results); literally just mining that data gives you the basis to easily build a network of translations that looks like natural language, with no mysticism required.
It can translate because the languages have already been translated. No amount of scraping websites can translate human language to dolphin.
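To make that point concrete: conventional machine translation, statistical or neural, gets its signal from parallel data, i.e. sentence pairs that humans have already translated. A toy co-occurrence count over a made-up English-Spanish corpus shows where the mapping comes from; for human-to-dolphin there is no such corpus to mine.

```python
# Toy illustration: the translation signal lives in already-translated sentence
# pairs (a parallel corpus). The corpus below is invented for the example.
from collections import Counter, defaultdict

parallel = [
    ("fish swims", "pez nada"),
    ("fish eats", "pez come"),
    ("dolphin swims", "delfin nada"),
]

# Count how often each source word co-occurs with each target word.
cooc = defaultdict(Counter)
for en, es in parallel:
    for w_en in en.split():
        for w_es in es.split():
            cooc[w_en][w_es] += 1

# Crude "translation": the target word that co-occurs most with the source word.
print(cooc["fish"].most_common(1), cooc["swims"].most_common(1))
# [('pez', 2)] [('nada', 2)]
```

Real systems are far more sophisticated, but the dependence on aligned examples is the same, which is the crux of the objection above.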
Assuming this is an emergent property of LLMs (and not a result of getting lucky with which pieces of the training data were memorized in the model weights), it has thus far only been demonstrated with human language.
Does dolphin language share enough homology with human language in terms of embedded representations of the utterances (clicks?)? Maybe LLMs are a useful tool to start probing these questions, but it seems excessively optimistic and ascientific to expect a priori that training an LLM of any type - especially a sensorily unimodal one - on non-human sounds would produce a functional translator.
Moreover, from DeepMind’s writeup on the topic:

Knowing the individual dolphins involved is crucial for accurate interpretation. The ultimate goal of this observational work is to understand the structure and potential meaning within these natural sound sequences — seeking patterns and rules that might indicate language.