#instructiontuning search results
Streamline by removing the constitution prompt, leaving only the instructions. This yields a dataset of 250k instruction-response pairs! Finally, this dataset trains a LoRA model, which leads to the development of the instruction-tuned Dromedary 🐪. #InstructionTuning #Dromedary (3/4)🧵
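For readers curious what the LoRA step looks like in practice, here is a minimal sketch using the Hugging Face peft library; the base model name and hyperparameters are illustrative assumptions, not the actual Dromedary recipe.

```python
# Minimal LoRA fine-tuning setup (illustrative; not the Dromedary recipe).
# Assumes an instruction-response dataset has already been prepared.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # placeholder base model, chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA injects small trainable low-rank adapters into attention projections,
# so only a tiny fraction of parameters is updated during instruction tuning.
config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor on the adapter output
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```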
Two types of large language models (LLMs):
- Base LLMs: predict the next word based on their training data.
- Instruction-tuned LLMs: follow given instructions to answer questions or complete tasks.
Which type of LLM is right for you? #AI #LLMs #instructiontuning
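A rough sketch of the difference in practice, using the transformers pipeline API; the model names are placeholders, and a recent transformers version that accepts chat-style message lists is assumed.

```python
# Illustrative contrast between a base and an instruction-tuned model
# (model names are placeholders; any base/instruct pair behaves similarly).
from transformers import pipeline

# Base LLM: pure next-word prediction, so it simply continues the text.
base = pipeline("text-generation", model="gpt2")
print(base("The capital of France is", max_new_tokens=10)[0]["generated_text"])

# Instruction-tuned LLM: expects an instruction and answers it directly.
# Chat models take a list of messages rather than a raw string.
chat = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
messages = [{"role": "user", "content": "What is the capital of France?"}]
out = chat(messages, max_new_tokens=20)
# For chat input, generated_text holds the conversation with the reply appended.
print(out[0]["generated_text"][-1]["content"])
```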
A thought-provoking study sheds light on the inherent limitations of Instruction Tuning (IT) for conversational large language models (LLMs) and its implications for knowledge enhancement. #LargeLanguageModels #InstructionTuning #KnowledgeEnhancement
Finally got to play with #dalle3 a bit. Super fun! Great quality, but still doesn't follow instructions faithfully. But maybe image models do not need to be as instruction-tuned as language models to be useful? Wdyt? Full post: maximalmargin.com/image_if/ #instructiontuning
Finally got my hands on #dalle3 and did my own 'a horse riding an astronaut' test -- using Sol LeWitt's instruction-based art as a framework. Here are my findings: 1/🧵
Enhancing Instruction Tuning in LLMs: A Diversity-Aware Data Selection Strategy Using Sparse Autoencoders #InstructionTuning #DataDiversity #SparseAutoencoders #AIResearch #MachineLearning itinai.com/enhancing-inst…
🚀New paper at #ACL2025 Findings! Instruction-Tuning Data Synthesis from Scratch via Web Reconstruction We propose WebR, a fully automated framework that turns raw web docs into high-quality instruction-tuning data — no seed data, minimal assumptions! #LLM #InstructionTuning
Started Instruction Tuning locally on a GeForce RTX 4070. Who knows how many hours training will take... For now, I've disabled the PC's sleep mode. Praying the memory hasn't thrown in the towel by tomorrow morning. Just let me start my workday in peace! #InstructionTuning #LLM #CUDA #torch
2/ Instruction Tuning review: FLAN's importance is often misunderstood. Short responses ≠ lack of value. Alternative models are also considered. #InstructionTuning #AI yaofu.notion.site/June-2023-A-St…
[11/13] The release of improved OLMo 2 Instruct models, tested via the Tülu 3 evaluation suite, showcases their strength in knowledge recall and reasoning, outperforming models like Qwen 2.5 14B. #AIModels #InstructionTuning
🎉 New paper alert! 🚀 BioInstruct: Instruction Tuning of Large Language Models for Biomedical NLP 🌟 A novel instruction-tuning dataset to push the limits of biomedical LMs. Work done by @hieutm_81 @YangZhichaoNLP @YaoZonghai and Prof. Hong Yu #BioNLP #InstructionTuning
I agree with the study that automatic data selection in instruction tuning is a key factor for successful optimization. #AutomaticDataSelection #InstructionTuning
Work started as part of my research internship at @AdobeResearch (@GauthamMysore's team) and completed at @gammaumd, advised by @dmanocha. #LLM #instructiontuning #AI #nlp #GenerativeAI
. @Swarooprm7 is the father of instruction tuning. Swaroop Mishra is amazing; the depth of his intellect can barely be fathomed. #AI #InstructionTuning
A key point for AI research in mathematical reasoning: instruction tuning can further increase models' understanding and problem-solving capabilities for complex mathematical problems. #Dataset #InfinityMath #instructiontuning #mathematical technicalterrence.com/tech/ai/infini…
technicalterrence.com
InfinityMath: A scalable instruction-tuning dataset for programmatic mathematical reasoning
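To illustrate what a "programmatic" mathematical-reasoning example generally looks like, here is a hypothetical instruction-response pair in the program-aided style; it is invented for illustration and is not an actual InfinityMath record.

```python
# Hypothetical example of a programmatic math reasoning pair
# (illustrative of the general style; not taken from InfinityMath itself).
# Instruction: "A train travels 120 km in 1.5 hours. What is its average speed?"

def solution():
    distance_km = 120
    time_hours = 1.5
    # Average speed = total distance / total time.
    return distance_km / time_hours

print(solution())  # 80.0 km/h
```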
🎉 New paper alert! Large Language Models are In-context Teachers for Knowledge Reasoning, an #EMNLP24 Findings paper 🔗 Read the paper: arxiv.org/abs/2311.06985 Work done by @jcz12856876 @YaoZonghai @YangZhichaoNLP and Prof. Hong Yu #BioNLP #InstructionTuning (0/N)
Master AI with expert Instruction Tuning. Markovate optimizes data for precise, effective solutions, ensuring your AI models excel. #AITuning #InstructionTuning shorturl.at/7SYa3
Discover how tweaking prompt token weights impacts LLM performance in instruction tuning. Learn about the effects on next-token prediction and the role of Cross Entropy Loss in fine-tuning. #LLM #AI #InstructionTuning towardsdatascience.com/to-mask-or-not…
towardsdatascience.com
To Mask or Not to Mask: The Effect of Prompt Tokens on Instruction Tuning | Towards Data Science
Implementing prompt-loss-weight, and why we should replace prompt-masking with prompt-weighting
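The core idea of the linked article, in code: prompt-masking is just prompt-weighting with weight 0. Here is a minimal sketch of a per-token weighted cross-entropy loss, assuming simple left-aligned prompts of length prompt_len; the function name and tensor shapes are illustrative assumptions, not the article's exact code.

```python
# Sketch of prompt-loss-weighting in next-token cross entropy
# (illustrative; shapes and prompt_len handling are assumptions).
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits, labels, prompt_len, prompt_weight=0.1):
    """Cross entropy over next-token predictions with down-weighted prompt tokens.

    logits: (batch, seq_len, vocab), labels: (batch, seq_len).
    prompt_weight=0.0 recovers standard prompt masking;
    prompt_weight=1.0 trains on prompt and response tokens equally.
    """
    # Shift so the prediction at position t is scored against token t+1.
    logits = logits[:, :-1, :]
    targets = labels[:, 1:]

    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view(targets.shape)

    # Weight vector: prompt tokens get prompt_weight, response tokens get 1.0.
    weights = torch.ones_like(per_token)
    weights[:, : prompt_len - 1] = prompt_weight
    return (per_token * weights).sum() / weights.sum()
```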
Check out the full paper for more details and insights! #LLM #InstructionTuning #AI #ML #NLP arxiv.org/abs/2410.05248