#knowledgeediting search results

The In-context Learning Editing (IKE) method proposed in this paper showcases impressive performance, even in complex multi-hop related scenarios. #knowledgeediting

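The core idea behind in-context learning editing is that the edited fact is supplied in the prompt rather than written into the model's weights. A minimal sketch of that prompt construction is below; the function name, demonstration format, and example fact are illustrative assumptions, not the paper's actual API.

```python
# Sketch of IKE-style in-context knowledge editing: instead of updating
# weights, prepend demonstrations plus the new (counterfactual) fact to
# the query so the model answers under the edit.

def build_ike_prompt(new_fact: str, demonstrations: list[str], question: str) -> str:
    """Compose an editing prompt: demonstrations first, then the edited
    fact, then the question the model should answer under that edit."""
    demo_block = "\n".join(demonstrations)
    return (
        f"{demo_block}\n"
        f"New fact: {new_fact}\n"
        f"Question: {question}\n"
        f"Answer:"
    )

demos = [
    "New fact: The Eiffel Tower is in Rome.\n"
    "Question: In which city is the Eiffel Tower?\nAnswer: Rome",
]
prompt = build_ike_prompt(
    new_fact="The capital of Australia is Sydney.",
    demonstrations=demos,
    question="What is the capital of Australia?",
)
print(prompt)
```

The demonstrations teach the model the "answer under the stated edit" behavior, which is why this approach can propagate an edit through multi-hop questions without any parameter change.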

Welcome to attend the tutorial on "Knowledge Editing for Large Language Models" at IJCAI 2024 @IJCAIconf . Here are the details #NLP #AI #KnowledgeEditing #LLMs : Tutorial: Knowledge Editing for Large Language Models Location: Room 1F-Yeongju B Time: 9:00-12:30, August 3, 2024…


Over the past year, #KnowledgeEditing has experienced rapid development. As the new year begins, I’ve taken some time to reflect on the progress of this field and share my thoughts on its future directions. I look forward to discussing and collaborating with everyone to further…


Our EasyEdit now supports editing multi-modal LLMs. Currently, you can edit MiniGPT-4 and Blip2. Check out the details here: github.com/zjunlp/EasyEdi…. Feel free to follow us for updates. Related paper: arxiv.org/abs/2310.08475. 🚀📚 #EasyEdit #ModelEditing #KnowledgeEditing


The tutorial of Knowledge Editing for Large Language Models at LREC-COLING 2024 @LrecColing is in progress. #NLP #LLMs #KnowledgeEditing


Weight editors like ROME & MEMIT can overwrite facts, and ELDER stacks LoRA adapters for lifelong tweaks—but “ripple effects” still flip 20-30% of related answers. #KnowledgeEditing

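The "overwrite facts" mechanism behind locate-and-edit methods can be illustrated with a rank-one weight update: pick a key vector for the subject and force the weight matrix to map it to a new value vector. This is a simplified sketch of the idea, not ROME's actual implementation (which locates a specific MLP layer and solves for the key and value via the model itself); all names and dimensions below are illustrative.

```python
import numpy as np

# Stand-ins for: an MLP projection matrix W, a key vector k encoding the
# subject of the fact, and a value vector v encoding the edited answer.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
k = rng.normal(size=8)
v = rng.normal(size=8)

# Rank-one update: W' = W + (v - W k) k^T / (k^T k), so that W' k = v
# exactly, while directions orthogonal to k are unchanged.
W_edited = W + np.outer(v - W @ k, k) / (k @ k)

print(np.allclose(W_edited @ k, v))  # the edited fact is now stored
```

The tweet's "ripple effects" follow directly from this picture: any input with a component along k is also perturbed by the update, so facts whose representations correlate with the edited key can flip too.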

Very glad that our work EasyEdit2 has been accepted to EMNLP 2025 Demonstration @emnlpmeeting , and looking forward to exchanging ideas together in Suzhou #EMNLP2025 #NLP #KnowledgeEditing #LLM #Steering ! Github: github.com/zjunlp/EasyEdi… Paper: arxiv.org/abs/2504.15133

🚀 Excited to introduce EasyEdit2 — a powerful upgrade to EasyEdit, now redesigned for unified, plug-and-play LLM behavior steering at inference time! #EasyEdit #LLM #ModelSteering #ModelEditing #KnowledgeEditing #EasyEdit2 #AI #InferenceTimeControl ✨ No retraining — just…



🔔Be careful when you use MQuAKE-3k to evaluate Knowledge Editing of LLMs. ❗ One-third of labels may not work due to knowledge conflicts. 💡 🔍Consider using our MQuAKE-2002 and MQuAKE-hard for more precise evaluations: wangywust.github.io/deepedit.io/ #LLM #KnowledgeEditing


Our new paper proposes a knowledge editing method for LLMs. 🚀 No retraining or prompt engineering: you only need our decoding method to achieve knowledge editing. 🎯 Feel free to check it out at: wangywust.github.io/deepedit.io/ #KnowledgeEditing #LLMs #DecodingMethod #nlp2024


Explore an advanced multi-hop question-answering framework that updates real-time knowledge in LLMs through retrieval-augmented editing. #LLMs #KnowledgeEditing #QuestionAnswering


New benchmarks for evaluating knowledge editing methods for LLMs: MQuAKE-2002 and MQuAKE-hard enable more precise evaluations. 👮 See how we removed annotation mistakes caused by knowledge conflicts at: wangywust.github.io/deepedit.io/ #KnowledgeEditing #LLMs #nlp2024


🧠 Revolutionary: Edit LLM knowledge in real-time! EasyEdit lets you update facts, remove biases & insert knowledge WITHOUT retraining. UltraEdit processes 20K edits in 5 minutes! ACL 2024 published, 2.5k stars ⭐ github.com/zjunlp/EasyEdit #AI #LLM #KnowledgeEditing


We need more practical evaluation for #modelEditing #KnowledgeEditing 🤔 #LLM #AI #ACL2025

😯To assess the real-world effectiveness of model editing techniques, we evaluated them on practical QA tasks and found that current editing methods perform substantially worse than previously reported (38.5% vs. 96%).



Just dropped a quick update: EasyEdit now supports using the official fine-tuning API of gpt-3.5-turbo to customize ChatGPT for your editing cases. Try it out! 🔥📝 #EasyEdit #ChatGPT #KnowledgeEditing #NLP #AI #LLM Github: github.com/zjunlp/EasyEdit ChatGPT FT Setting:…


Want LLMs that *actually* learn from knowledge updates & don't just memorize? 🤔 Check out "CaKE" - Circuit-aware Knowledge Editing! It guides models to *reason* with new info, not just parrot it back. ➡️ [link to paper] #AI #KnowledgeEditing


This is a systematic study on technical AGI safety and security. Interpretability techniques like steering vectors and circuit analysis can help us understand and improve LLM safety—but they can also be misused. #Safety #ModelEditing #KnowledgeEditing #LLM #NLP

Excited to share @GoogleDeepMind's AGI safety and security strategy to tackle risks like misuse and misalignment. Rather than high-level principles, this 145-page paper outlines a concrete, defense-in-depth technical approach: proactively evaluating & restricting dangerous…



Impressive summary and outlook of #KnowledgeEditing 😇

Over the past year, #KnowledgeEditing has experienced rapid development. As the new year begins, I’ve taken some time to reflect on the progress of this field and share my thoughts on its future directions. I look forward to discussing and collaborating with everyone to further…



📜Join us on this journey to revolutionize the way we keep our language models relevant and up-to-date. Your input and insights are invaluable! 🙌 #KnowledgeEditing #LargeLanguageModels #Survey #AI #NLP #Innovation



Results: State-of-the-art performance across datasets! Big shoutout to my amazing students @AmitRozner and Barak Battas for their amazing work! Join us and check out the paper here 👉 arxiv.org/pdf/2406.09920 #EMNLP #AIResearch #KnowledgeEditing #MachineLearning #NLP


🧠 Discover the latest in enhancing Large Language Models with innovative Knowledge Editing Techniques by Mike Young. Learn how KME can update models efficiently without losing valuable knowledge. #AI #NLP #KnowledgeEditing 📚 ift.tt/cjyp5d0

dev.to

Enhancing Large Language Models: A Survey of Knowledge Editing Techniques



Introducing OneEdit: A groundbreaking neural-symbolic system offering seamless integration and conflict resolution in knowledge graphs and large language models. Read the full blog post at: ift.tt/rThS4DA #AI #KnowledgeEditing #NeuralSymbolic

