SymmetricalVoice
@Symmetricalvoic
Linguistics and language conservation, with special attention to the Pacific. Run by @peterschuelke
A taro patch cultivated by some Roviana community members in the Solomon Islands. Blust (ACD) reconstructs *talos as the Proto-Oceanic word for taro. Many modern Oceanic languages reflect some form of *talos, as taro is an important part of Pacific culture. 1/
Linguistics/CompSci/CogSci twitter: where and when did the syntax tree originate? I'm asking about the pictorial representation, not the concept: which paper used it first? Did it appear first in the computer science literature or the linguistics literature?
Can we model syntax from speech? Most models of syntax are text-based. Here we propose that basic syntax can be modeled from raw speech. GANs trained on individual words start to concatenate them into multiple-word outputs. Sometimes the model even concatenates three words:
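A minimal sketch of what such a setup can look like (not the authors' implementation): a WaveGAN-style generator that upsamples a latent vector into a raw waveform with 1D transposed convolutions. PyTorch and the layer sizes are my assumptions.

```python
# Illustrative only: a toy WaveGAN-style generator (assumed architecture, not the paper's code).
import torch
import torch.nn as nn

class WaveformGenerator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 16)        # project noise to a short feature map
        self.net = nn.Sequential(
            nn.ConvTranspose1d(256, 128, kernel_size=25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(128, 64, kernel_size=25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 1, kernel_size=25, stride=4, padding=11, output_padding=1),
            nn.Tanh(),                                    # waveform samples in [-1, 1]
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 16)
        return self.net(x)                                # (batch, 1, 1024) raw audio samples

z = torch.randn(8, 100)                                   # a batch of latent codes
print(WaveformGenerator()(z).shape)                       # torch.Size([8, 1, 1024])
```

Training a generator like this adversarially on recordings of single words is the kind of setup in which the multi-word concatenations described above emerge.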
theta roles are just a 'gotcha' for syntacticians, like it's always "what do you mean 'subject'? this verb only assigns a SCORPIO"
Interpretability is one of the main frontiers in AI. Doing interpretability work on speech is easier than in vision: among other things, sound is 1D (a huge advantage). Here's a presentation of our IEEE/ACM TASLP paper, which introduces a technique to interpret layers in audio CNNs.
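A rough illustration of why 1D audio makes layer inspection convenient (my own toy example, not the TASLP technique): intermediate activations of a 1D convolution can simply be plotted against time, alongside the waveform.

```python
# Toy example (an untrained Conv1d stands in for a real audio CNN layer).
import math
import torch
import matplotlib.pyplot as plt

t = torch.linspace(0, 1, 16000)
waveform = torch.sin(2 * math.pi * 220 * t)               # 1 s synthetic 220 Hz tone
conv = torch.nn.Conv1d(1, 16, kernel_size=64, stride=16)  # stand-in convolutional layer

with torch.no_grad():
    acts = conv(waveform.view(1, 1, -1))                  # (1, 16, time frames)

plt.plot(acts[0].abs().mean(0).numpy())                   # average |activation| per time frame
plt.xlabel("time frame")
plt.ylabel("mean |activation|")
plt.show()
```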
We're hiring! Applications for ELP's summer 2023 internships are now open - if you are passionate about supporting #languagerevitalization, we'd love to hear from you! Learn more about the positions and apply by May 10 at: bit.ly/ELP-interns2023
GPT4, unlike ChatGPT, does not hallucinate papers and actually gives useful recommendations. Prompt: “Give me references to papers that measure vowel duration before stops with various laryngeal features”
The link is in the document! Join us now for the online talk "Climbing the WALS: Typological Case Studies in LLMs Probing Using Syntactic Probing Framework"!
Join us to learn more about our study and the implications of our findings for natural language processing. The link for the event will be placed in this Google Doc immediately before the event: docs.google.com/document/d/1Ls…
Join us as we learn #Ghanongga #Ganoqa youtu.be/bK5NqUX8Bfk
Ganoqa 15: learning Ganoqa language from Solomon Islands
1/7 Discussing low-resource languages, we highlighted that the amount of data for a language doesn't correlate with its number of speakers. The number of speakers isn't the main factor affecting language vitality; other factors also play a part. Which ones? A 🧵
We have discussed a lot about how to tackle the lack of data in NLP, but why does this lack arise? In this thread, we will look at the concept of “low-resourcedness” and try to understand the reasons for it 🧵
Uncovering the unknowns in sperm whale communication 🐳 using deep learning. We introduce a novel interpretability technique (CDEV) that combines generative deep models and causal inference methods. Our model discovers meaningful properties, both novel and previously hypothesized.
#OTD 146 years ago, Wilhelm Doegen (1877-1967) was born 🥳 An expert on phonetics and phonology, and the inventor of a speech recording device known as the Doegen-Lautapparat (1909). During WW1, he recorded speech samples and songs of PoWs in diverse languages. #LinguisticBirthdays #Histlx
Animals in Kriol where the etymology is the English plural:
hosis = horse (from 'horses')
ens = ant (from 'ants')
I reckon there are a couple more examples that will come to me...
"John is too stubborn to talk to" is the new "colorless green ideas sleep furiously" Prompt: Draw a minimalist syntactic analysis of the sentence "John is too stubborn to talk to" GPT4:
GPT4 Prompt: draw an OT Tableau for the process /anta/ > [anda]
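For reference, a textbook-style version of that tableau (my sketch, assuming the standard post-nasal-voicing ranking *NC (no nasal + voiceless obstruent cluster) >> IDENT(voice)); the arrow marks the winning candidate:

```latex
% Sketch of a standard tableau for /anta/ > [anda] (post-nasal voicing).
\begin{tabular}{l||c|c}
  /anta/                   & *NC & \textsc{Ident}(voice) \\ \hline\hline
  a.\ [anta]               & *!  &                       \\ \hline
  b.\ $\rightarrow$ [anda] &     & *                     \\
\end{tabular}
```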
GPT4 draws a syntactic tree for the sentence: "I saw an elephant with glasses" 🐘😎 Interesting timing given a recent opinion piece in @nytimes
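The sentence is a classic PP-attachment ambiguity, which is what makes it a nice tree-drawing test: "with glasses" can modify the seeing or the elephant. A quick sketch of the two parses (my own bracketings, not GPT4's output), using nltk:

```python
# Two possible parses of "I saw an elephant with glasses" (illustrative bracketings).
from nltk import Tree

vp_attachment = Tree.fromstring(
    "(S (NP I) (VP (V saw) (NP (Det an) (N elephant)) (PP (P with) (NP (N glasses)))))")
np_attachment = Tree.fromstring(
    "(S (NP I) (VP (V saw) (NP (Det an) (N elephant) (PP (P with) (NP (N glasses))))))")

vp_attachment.pretty_print()   # PP attaches to the VP: I used glasses to see the elephant
np_attachment.pretty_print()   # PP attaches to the NP: the elephant is wearing glasses
```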
Not a huge Chomsky fan myself, but can we please stop with the LLMs disprove Chomsky thing? Humans are not computers. We don't know that computers learn grammar in the same way, we don't know that humans can learn like computers, and LLMs require significantly more data.
Here's a look at our poster from #HSP2023! Adults who spend many hours reading may read object relative clauses faster, BUT this effect is modulated by the syntactic complexity of the text (websites) they frequently read
One advantage of speech-based models trained on smaller datasets is that we can observe how the network learns as it is being trained (unlike in text-based GPT) and compare “machine acquisition” to language acquisition stages. They can be very similar: youtu.be/oClrwnh0DLc