#knowledgeediting search results
The In-context Learning Editing (IKE) method proposed in this paper showcases impressive performance, even in complex multi-hop related scenarios. #knowledgeediting
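The core idea behind in-context editing methods like IKE is to change model behavior without touching weights: prepend the new fact and a few demonstrations to the query. A minimal sketch of that prompt construction, with an entirely hypothetical fact and helper name (this is an illustration of the idea, not IKE's actual prompt format):

```python
# Minimal sketch of in-context knowledge editing: instead of modifying model
# weights, prepend the new fact plus a few worked demonstrations to the query.
# The function name, fact, and demos are illustrative, not IKE's API.

def build_ike_prompt(new_fact: str, demos: list[tuple[str, str]], query: str) -> str:
    """Compose an editing prompt: new fact, demonstrations, then the query."""
    lines = [f"New fact: {new_fact}", ""]
    for q, a in demos:
        lines.extend([f"Q: {q}", f"A: {a}", ""])
    lines.extend([f"Q: {query}", "A:"])
    return "\n".join(lines)

prompt = build_ike_prompt(
    "The capital of the fictional country Examplia is Newtown.",
    [("What is the capital of Examplia?", "Newtown"),
     ("Newtown is the capital of which country?", "Examplia")],
    "In which city is the government of Examplia located?",
)
```

The resulting string would then be fed to any instruction-following model as-is; the multi-hop demonstrations are what help the edit propagate to related questions.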
Welcome to attend the tutorial on "Knowledge Editing for Large Language Models" at IJCAI 2024 @IJCAIconf . Here are the details #NLP #AI #KnowledgeEditing #LLMs : Tutorial: Knowledge Editing for Large Language Models Location: Room 1F-Yeongju B Time: 9:00-12:30, August 3, 2024…
Over the past year, #KnowledgeEditing has experienced rapid development. As the new year begins, I’ve taken some time to reflect on the progress of this field and share my thoughts on its future directions. I look forward to discussing and collaborating with everyone to further…
Our EasyEdit now supports editing multi-modal LLMs. Currently, you can edit MiniGPT-4 and Blip2. Check out the details here: github.com/zjunlp/EasyEdi…. Feel free to follow us for updates. Related paper: arxiv.org/abs/2310.08475. 🚀📚 #EasyEdit #ModelEditing #KnowledgeEditing…
The tutorial of Knowledge Editing for Large Language Models at LREC-COLING 2024 @LrecColing is in progress. #NLP #LLMs #KnowledgeEditing
Weight editors like ROME & MEMIT can overwrite facts, and ELDER stacks LoRA adapters for lifelong tweaks, but “ripple effects” still flip 20-30% of related answers. #KnowledgeEditing
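The "ripple effect" the tweet describes can be illustrated with a toy triple store: editing one fact fixes the direct query but silently breaks answers derived through it. This is a plain-dictionary caricature, not ROME or MEMIT themselves, and all entity names are made up:

```python
# Toy illustration of the "ripple effect": a single-fact edit fixes the direct
# query but leaves multi-hop answers inconsistent, because related facts were
# never updated. A dict stands in for the model's knowledge.

facts = {("Examplia", "capital"): "Oldtown",
         ("Oldtown", "continent"): "Euria"}

def two_hop(subject):
    """What continent is the subject's capital on?"""
    capital = facts[(subject, "capital")]
    return facts.get((capital, "continent"), "unknown")

cached = two_hop("Examplia")                  # pre-edit: "Euria"
facts[("Examplia", "capital")] = "Newtown"    # the edit: overwrite one fact

direct = facts[("Examplia", "capital")]       # direct query now answers "Newtown"
rippled = two_hop("Examplia")                 # but the two-hop answer is now broken
```

In a real model the analogue is worse: related answers don't become "unknown", they flip to plausible-sounding wrong ones, which is why ripple-effect benchmarks probe multi-hop neighbors of each edit.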
Very glad that our work EasyEdit2 has been accepted to EMNLP 2025 Demonstration @emnlpmeeting , and looking forward to exchanging ideas together in Suzhou #EMNLP2025 #NLP #KnowledgeEditing #LLM #Steering ! Github: github.com/zjunlp/EasyEdi… Paper: arxiv.org/abs/2504.15133
🚀 Excited to introduce EasyEdit2 — a powerful upgrade to EasyEdit, now redesigned for unified, plug-and-play LLM behavior steering at inference time! #EasyEdit #LLM #ModelSteering #ModelEditing #KnowledgeEditing #EasyEdit2 #AI #InferenceTimeControl ✨ No retraining — just…
🔔Be careful when you use MQuAKE-3k to evaluate Knowledge Editing of LLMs. ❗ One-third of labels may not work due to knowledge conflicts. 💡 🔍Consider using our MQuAKE-2002 and MQuAKE-hard for more precise evaluations: wangywust.github.io/deepedit.io/ #LLM #KnowledgeEditing
Our new paper proposes a knowledge editing method for LLMs. 🚀 No retraining or prompt engineering: our decoding method alone achieves knowledge editing. 🎯 Feel free to check it out at: wangywust.github.io/deepedit.io/ #KnowledgeEditing #LLMs #DecodingMethod #nlp2024
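The general flavor of decoding-time editing is to re-rank candidate continuations so those consistent with the edited fact win, leaving the weights untouched. A toy sketch of that intervention (the scoring, bonus, and consistency check are stand-ins, not DeepEdit's actual method):

```python
# Sketch of decoding-time knowledge editing: rather than changing weights,
# re-score candidate next tokens so ones consistent with the edited fact are
# preferred. The consistency check and bonus are toy stand-ins.

def edited_decode(candidates, base_scores, new_fact_tokens, bonus=5.0):
    """Return the candidate with the highest score after rewarding fact-consistent ones."""
    best, best_score = None, float("-inf")
    for cand, score in zip(candidates, base_scores):
        if cand in new_fact_tokens:   # candidate agrees with the edit
            score += bonus
        if score > best_score:
            best, best_score = cand, score
    return best

# The base model prefers the stale answer "Oldtown"; the edited decoder flips it.
stale = edited_decode(["Oldtown", "Newtown"], [2.0, 1.0], set())
edited = edited_decode(["Oldtown", "Newtown"], [2.0, 1.0], {"Newtown"})
```

The appeal of this family of methods is exactly what the tweet claims: the intervention lives entirely in the decoding loop, so no gradient steps or prompt templates are needed.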
Explore an advanced multi-hop question-answering framework that updates real-time knowledge in LLMs through retrieval-augmented editing. #LLMs #KnowledgeEditing #QuestionAnswering
New benchmarks for evaluating knowledge editing methods for LLMs. Get precise evaluations with our benchmarks MQuAKE-2002 and MQuAKE-hard. 👮 See how we remove the annotation mistakes caused by knowledge conflicts at: wangywust.github.io/deepedit.io/ #KnowledgeEditing #LLMs #nlp2024
🧠 Revolutionary: Edit LLM knowledge in real-time! EasyEdit lets you update facts, remove biases & insert knowledge WITHOUT retraining. UltraEdit processes 20K edits in 5 minutes! ACL 2024 published, 2.5k stars ⭐ github.com/zjunlp/EasyEdit #AI #LLM #KnowledgeEditing
We need more practical evaluation for #modelEditing #KnowledgeEditing 🤔 #LLM #AI #ACL2025
😯To assess the real-world effectiveness of model editing techniques, we evaluated them on practical QA tasks and found that current editing methods perform substantially worse than previously reported (38.5% vs. 96%).
Just dropped a quick update: EasyEdit now supports using the official fine-tuning API of gpt-3.5-turbo to customize ChatGPT for your editing cases. Try it out! 🔥📝 #EasyEdit #ChatGPT #KnowledgeEditing #NLP #AI #LLM Github: github.com/zjunlp/EasyEdit ChatGPT FT Setting:…
Want LLMs that *actually* learn from knowledge updates & don't just memorize? 🤔 Check out "CaKE" - Circuit-aware Knowledge Editing! It guides models to *reason* with new info, not just parrot it back. ➡️ [link to paper] #AI #KnowledgeEditing
This is a systematic study on technical AGI safety and security. Interpretability techniques like steering vectors and circuit analysis can help us understand and improve LLM safety—but they can also be misused. #Safety #ModelEditing #KnowledgeEditing #LLM #NLP
Excited to share @GoogleDeepMind's AGI safety and security strategy to tackle risks like misuse and misalignment. Rather than high-level principles, this 145-page paper outlines a concrete, defense-in-depth technical approach: proactively evaluating & restricting dangerous…
📍 Find us at ACL 2025 – Hall 5X, Poster #83 🌐 More details & resources: yangwl.site/revisit-editin… See you there! #ACL2025 #ModelEditing #KnowledgeEditing
yangwl.site
The Mirage of Model Editing: Revisiting Evaluation in the Wild
This paper reveals that existing model-editing evaluation adopts inappropriate strategies, such as teacher forcing during testing, which substantially overestimates the effectiveness of existing techniques.
🛑 Stop using teacher forcing to evaluate model editing! Our ACL 2025 poster shows why past evaluations mislead progress & how to test editing in the wild. 📍 July 30, 11:00 AM – come chat! #ModelEditing #LLM #ACL2025NLP
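Why teacher forcing inflates editing scores can be shown with a toy next-token table: per-token accuracy under gold prefixes looks high even when free generation never produces the edited answer. A minimal sketch (the lookup-table "model" and example tokens are invented for illustration):

```python
# Sketch of the teacher-forcing pitfall in editing evaluation: scoring each
# token with the *gold* prefix fed in masks the fact that, left to generate
# freely, the model still outputs the stale answer. A dict is the toy model.

next_token = {(): "The", ("The",): "capital", ("The", "capital"): "is",
              ("The", "capital", "is"): "Oldtown"}   # stale, un-edited behavior
gold = ["The", "capital", "is", "Newtown"]           # the edited target answer

# Teacher forcing: condition on the gold prefix at every step, count matches.
tf_hits = sum(next_token.get(tuple(gold[:i]), "") == tok
              for i, tok in enumerate(gold))
tf_acc = tf_hits / len(gold)      # 3 of 4 tokens match: looks respectable

# Free generation: the model conditions on its own previous output.
out = []
for _ in range(len(gold)):
    out.append(next_token.get(tuple(out), "<eos>"))
exact_match = out == gold         # the edit never actually surfaces
```

Evaluating "in the wild", as the poster argues, means scoring the freely generated answer, which is the only thing a downstream user ever sees.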
Impressive summary and outlook of #KnowledgeEditing 😇
Over the past year, #KnowledgeEditing has experienced rapid development. As the new year begins, I’ve taken some time to reflect on the progress of this field and share my thoughts on its future directions. I look forward to discussing and collaborating with everyone to further…
#KnowledgeEditing #NLP #LLMs #AI #ModelEditing The slides are available at: drive.google.com/file/d/1vFzRYj… More materials can be found at: github.com/zjunlp/Knowled… github.com/zjunlp/EasyEdit
📜Join us on this journey to revolutionize the way we keep our language models relevant and up-to-date. Your input and insights are invaluable! 🙌 #KnowledgeEditing #LargeLanguageModels #Survey #AI #NLP #Innovation
Results: State-of-the-art performance across datasets! Big shoutout to my amazing students @AmitRozner and Barak Battas for their amazing work! Join us and check out the paper here 👉 arxiv.org/pdf/2406.09920 #EMNLP #AIResearch #KnowledgeEditing #MachineLearning #NLP
🧠 Discover the latest in enhancing Large Language Models with innovative Knowledge Editing Techniques by Mike Young. Learn how KME can update models efficiently without losing valuable knowledge. #AI #NLP #KnowledgeEditing 📚 ift.tt/cjyp5d0
dev.to
Enhancing Large Language Models: A Survey of Knowledge Editing Techniques
Introducing OneEdit: A groundbreaking neural-symbolic system offering seamless integration and conflict resolution in knowledge graphs and large language models. Read the full blog post at: ift.tt/rThS4DA #AI #KnowledgeEditing #NeuralSymbolic
Why editing an LLM's knowledge after training produces troublesome ripple effects #LLM #knowledgeediting #rippleeffects #GradSim prompthub.info/32956/
prompthub.info
Why editing an LLM's knowledge after training produces troublesome ripple effects - PromptHub
Since the arrival of ChatGPT, large language models (LLMs) have spread widely, and many online users access them daily…
Why editing an LLM's knowledge after training produces troublesome ripple effects #KnowledgeEditing #RippleEffects #LanguageModels #ScienceX prompthub.info/32944/
prompthub.info
Why editing an LLM's knowledge after training produces troublesome ripple effects - PromptHub
Since the arrival of ChatGPT, large language models (LLMs) have spread widely, and many online users make use of them…