#instructiontuning search results

Streamline by removing the constitution prompt, leaving only the instructions. The result is a dataset of 250k instruction-response pairs! Finally, this dataset is used to fine-tune a LoRA model, yielding the instruction-tuned Dromedary 🐪. #InstructionTuning #Dromedary (3/4)🧵

ByteWiseWizard's tweet image.
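
A minimal sketch of that final step, LoRA fine-tuning on an instruction-response dataset with Hugging Face peft and trl. The base model, the stand-in dataset, and the hyperparameters below are illustrative assumptions, not the actual Dromedary recipe.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Stand-in instruction-response corpus, not the Dromedary data.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def to_text(example):
    # Flatten each instruction-response pair into one training string.
    return f"Instruction: {example['instruction']}\n\nResponse: {example['output']}"

peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # illustrative values
    target_modules=["q_proj", "v_proj"],     # adapt only the attention projections
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",               # small stand-in base model
    train_dataset=dataset,
    formatting_func=to_text,
    peft_config=peft_config,
    args=SFTConfig(output_dir="lora-instruct-sketch"),
)
trainer.train()  # only the small LoRA adapter weights are updated
```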

Two types of large language models (LLMs):
- Base LLMs: predict the next word based on training data.
- Instruction-tuned LLMs: follow given instructions to answer questions or complete tasks.

Which type of LLM is right for you? #AI #LLMs #instructiontuning
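
For intuition, the difference shows up right at the prompt. A rough sketch with Hugging Face transformers; the Qwen checkpoints are just one small base/instruct pair, and any such pair would do.

```python
from transformers import pipeline

prompt = "Explain overfitting in one sentence."

# Base LLM: trained only to continue text, so it may ramble or echo the
# question instead of answering it.
base = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B")
print(base(prompt, max_new_tokens=40)[0]["generated_text"])

# Instruction-tuned LLM: fine-tuned on instruction-response pairs, so it
# treats the prompt as a task. Chat-style input applies the model's template.
instruct = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
messages = [{"role": "user", "content": prompt}]
print(instruct(messages, max_new_tokens=40)[0]["generated_text"])
```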


A thought-provoking study sheds light on the inherent limitations of Instruction Tuning (IT) in conversational large language models (LLMs) and its implications for knowledge enhancement. #LargeLanguageModels #InstructionTuning #KnowledgeEnhancement

GoatstackAI's tweet image.

Finally got to play with #dalle3 a bit. Super fun! Great quality, but still doesn't follow instructions faithfully. But maybe image models do not need to be as instruction-tuned as language models to be useful? Wdyt? Full post: maximalmargin.com/image_if/ #instructiontuning

infoxiao's tweet image.

Finally got my hands on #dalle3 and did my own 'a horse riding an astronaut' test -- using Sol LeWitt's instruction-based art as a framework. Here are my findings: 1/🧵



Enhancing Instruction Tuning in LLMs: A Diversity-Aware Data Selection Strategy Using Sparse Autoencoders #InstructionTuning #DataDiversity #SparseAutoencoders #AIResearch #MachineLearning itinai.com/enhancing-inst…

vlruso's tweet image.
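
For intuition only, one generic way diversity-aware selection over sparse-autoencoder features could look: greedily pick the example that activates the most features not yet covered by the selected set. A toy sketch on random activations, not the algorithm from the linked paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_features = 10_000, 4_096
# Pretend each row holds an example's sparse autoencoder feature activations.
feats = (rng.random((n_examples, n_features)) > 0.999).astype(np.float32)

budget, covered, selected = 100, np.zeros(n_features, dtype=bool), []
for _ in range(budget):
    # Marginal gain: how many still-uncovered features each example activates.
    gains = (feats[:, ~covered] > 0).sum(axis=1)
    best = int(np.argmax(gains))
    selected.append(best)
    covered |= feats[best] > 0

print(f"{budget} examples cover {int(covered.sum())} of {n_features} features")
```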

🚀New paper at #ACL2025 Findings! Instruction-Tuning Data Synthesis from Scratch via Web Reconstruction We propose WebR, a fully automated framework that turns raw web docs into high-quality instruction-tuning data — no seed data, minimal assumptions! #LLM #InstructionTuning

Yuxin_Jiang_'s tweet image.
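
The genre, if not WebR's specific pipeline, is easy to sketch: hand an LLM a raw web document and ask it to reconstruct an instruction that the document could answer. The model name and prompt wording below are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def doc_to_pair(doc: str) -> dict:
    # Ask the model to invent the instruction; the web doc itself becomes
    # the response half of the pair.
    instruction = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{
            "role": "user",
            "content": "Write one instruction for which the document below "
                       "would be a high-quality response. Return only the "
                       f"instruction.\n\nDocument:\n{doc[:4000]}",
        }],
    ).choices[0].message.content
    return {"instruction": instruction, "response": doc}

print(doc_to_pair("Raw web document text goes here...")["instruction"])
```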

Started instruction tuning locally on a GeForce RTX 4070. Who knows how many hours training will take... For now, I've disabled the PC's sleep mode. Praying the memory hasn't thrown in the towel by tomorrow morning. Just let me start the workday in a good mood! #InstructionTuning #LLM #CUDA #torch

industrial_ds's tweet image.
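
Before leaving a run like this overnight on a 12 GB card, a quick headroom check costs nothing. A minimal torch snippet; the 12 GB figure is just the RTX 4070's spec.

```python
import torch

# Confirm CUDA is visible and report how much VRAM is in use before training.
assert torch.cuda.is_available(), "No CUDA device found"
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB total")
print(f"{torch.cuda.memory_allocated(0) / 2**30:.2f} GiB already allocated")
```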

2/ Instruction-tuning review: FLAN's importance is misunderstood; short responses ≠ lack of value. Alternative models are considered. #InstructionTuning #AI yaofu.notion.site/June-2023-A-St…


[11/13] The release of improved OLMo 2 Instruct models, tested via the Tülu 3 evaluation suite, showcases their strength in knowledge recall and reasoning, outperforming models like Qwen 2.5 14B. #AIModels #InstructionTuning


🎉 New paper alert! 🚀 BioInstruct: Instruction Tuning of Large Language Models for Biomedical NLP 🌟 A novel instruction-tuning dataset to push the limits of biomedical LMs. Work done by @hieutm_81 @YangZhichaoNLP @YaoZonghai and Prof. Hong Yu #BioNLP #InstructionTuning


I agree with the study that automatic data selection in instruction tuning is a key factor for successful optimization. #AutomaticDataSelection #InstructionTuning


Work started as part of my research internship at @AdobeResearch (@GauthamMysore’s team) and was completed at @gammaumd, advised by @dmanocha. #LLM #llm #instructiontuning #AI #nlp #GenerativeAI


. @Swarooprm7 is the father of instruction tuning. Swaroop Mishra is amazing; the depth of his intellect is barely fathomed. #AI #InstructionTuning

most of the indian ‘ai influencers’ on twitter are really not worth following. they’re sloppy and lack real technical rigour. it’s really easy to fake rigour online. it’s also easy to see through the bullshit if you just try to.



🎉 New paper alert! Large Language Models are In-context Teachers for Knowledge Reasoning, in #EMNLP24 Findings 🔗 Read the paper: arxiv.org/abs/2311.06985 Work done by @jcz12856876 @YaoZonghai @YangZhichaoNLP and Prof. Hong Yu #BioNLP #InstructionTuning (0/N)


Master AI with expert Instruction Tuning. Markovate optimizes data for precise, effective solutions, ensuring your AI models excel. #AITuning #InstructionTuning shorturl.at/7SYa3


Check out the full paper for more details and insights! #LLM #InstructionTuning #AI #ML #NLP arxiv.org/abs/2410.05248

