#adversarialml search results

bytebiscuit: (3rd chapter)

Different types of adversarial ML attacks:
- Perturbation
- Membership inference
- Model stealing
- Deep neural network jailbreaking
- Physical-domain poisoning
- Training data reconstruction
- Poisoning attacks
- Model backdoor attacks

#machinelearning #adversarialML
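Of these, membership inference is perhaps the easiest to demonstrate: a model that memorises its training data leaks, through its confidence, whether a given record was in the training set. A minimal sketch, assuming a 1-nearest-neighbour "model" as the memoriser and synthetic data (all names and numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(40, 5))   # records the model was trained on ("members")
test = rng.normal(size=(40, 5))    # records it never saw ("non-members")

def confidence(x, memorised):
    # Confidence proxy for a memorising model: negative distance to the
    # closest training point (exactly 0 when x was memorised verbatim).
    return -np.linalg.norm(memorised - x, axis=1).min()

# The attacker queries confidence and guesses "member" above a threshold.
threshold = -0.5
member_hits = sum(confidence(x, train) > threshold for x in train)
nonmember_hits = sum(confidence(x, train) <= threshold for x in test)
accuracy = (member_hits + nonmember_hits) / 80
print(f"membership-inference accuracy: {accuracy:.2f}")
```

The more a model overfits, the wider the confidence gap between members and non-members, and the better this threshold attack works.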

cyberwarfarelab: [Adversarial ML] Learn about and understand adversarial machine learning and adversarial threat models.

Link: blog.cyberwarfare.live/adversarial-at…
Website: cyberwarfare.live
Email: support@cyberwarfare.live

#redteam #adversarialML #offensivemachinelearning #machinelearning

Machine Learning Attack Series: Image Scaling Attacks embracethered.com/blog/posts/202… Hiding an image inside another: the hidden image becomes visible when the server rescales it. 🤯 #machinelearning #adversarialml #offensiveml #redteaming #pentest #infosec #ml #aiml
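The trick fits in a few lines of numpy, assuming the server uses nearest-neighbour downscaling with a fixed stride (real libraries interpolate, which published attacks account for, but the sampling principle is the same; the images here are synthetic placeholders):

```python
import numpy as np

k = 8                                               # downscale factor
hidden = np.full((32, 32), 255, dtype=np.uint8)     # image the attacker wants revealed
decoy = np.zeros((32 * k, 32 * k), dtype=np.uint8)  # image a human reviewer sees

# Plant the hidden pixels exactly where strided downscaling will sample.
attack = decoy.copy()
attack[::k, ::k] = hidden

def nearest_downscale(img, k):
    return img[::k, ::k]

changed = np.mean(attack != decoy)       # fraction of altered pixels: 1/k**2
revealed = nearest_downscale(attack, k)  # identical to `hidden`
```

Only one pixel in every k×k block changes, so the full-resolution image still looks like the decoy while the rescaled copy is exactly the hidden image.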


ishabytes: Awesome #adversarialml research at the poster session today at @icmlconf

See you at 6 pm (Colombia time) to talk about security in artificial intelligence at @DragonJARCon @icesi @Editorial_Icesi #MachineLearning #DataScience #adversarialml #BigData #infosec #hacker


ShashaLi16: Yep, our adversarial defense paper is accepted at #ECCV! Context modelling is used to detect adversarial examples that are out of context, e.g., a speed limit sign (which should be a stop sign) at a crossing with a stop line. Preprint and code coming soon. #adversarialML @laosong @zst_rising88

Got my first ECCV paper, on detecting adversarial perturbations using context-inconsistency, with our excellent students @ShashaLi16, @zst_rising88, and Sudipta.



davidvonthenen: Loved being back in Berlin for @WeAreDevs World Congress. Great to catch up with friends and chat with so many amazing people! 🎉 Recording of my talk “Confuse, Obfuscate, Disrupt” is out now: bit.ly/3I5OEJU #AdversarialML

coevolution_eu: New article! 📖 Discover SecML-Torch, an open-source Python library from @univca’s sAIfer Lab, designed to advance research in Adversarial Machine Learning (AML) and evaluate ML model robustness. 🔗 Read the full article: coevolution-project.eu/secml-torch-a-… #AdversarialML #Cybersecurity

dataiku: Great technical talks this morning from speakers at @Facebook, @McAfee and @unboxresearch exploring the important steps we can take toward building responsible, inclusive AI systems. #EGGSF2019 #datascience #adversarialML #MachineLearning #ML

PooyanJamshidi: Time to brag a little bit? ATHENA (softsys4ai.github.io/athena/) is going to become something! A big-name company is going to adopt and integrate it, potentially reaching millions of users! (All credit to Ying Meng and Jianhai Su, the two stellar students at AISys Lab!) 😍😍 #adversarialML

biggiobattista: Finished the last slides for our #ECCV2018 half-day tutorial on #AdversarialML, to be held tomorrow! Sept. 8, TU Munich, Room N1179, starting 8:30 am. eccv2018.org/program/worksh… Looking forward to some interesting discussions!

biggiobattista: I'm at @ECMLPKDD in Dublin to give an invited talk at the IBM Nemesis workshop research.ibm.com/labs/ireland/n… on #AdversarialML, organized by @ririnicolae. Cool overview of recent attacks/defenses against deep learning algorithms so far!

ShashaLi16: Delighted to share that our paper (arxiv.org/pdf/2110.01823…) is accepted at #NeurIPS2021. Your video classifier is openly vulnerable to simple geometrically transformed perturbations! Thanks to the great team @aaich001 @zst_rising88 @laosong (and others not on here). #adversarialML

vagelispapalex: I was really excited and honored to give a talk about #tensors for #graphmining and #adversarialML at the One World Signal Processing seminar series yesterday! Special thanks to the organizers @XOsueecs @yanning_shen @hoitowai!!! #graphs #machinelearning #signalprocessing

ASPLOSConf: How does an approximate multiplier design act as a defense against adversarial attacks? Defensive Approximation: Securing CNNs using Approximate Computing asplos-conference.org/abstracts/aspl… @AmiraGuesmi4 @ihstein @nael_ag #ASPLOS21 #AdversarialML #Approximation
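The paper's mechanism is an approximate multiplier inside the CNN itself; as a loosely related, much-simplified illustration of why reduced arithmetic precision can absorb small adversarial perturbations, consider coarse input quantization (the step size and numbers below are arbitrary choices for this sketch, not the paper's design):

```python
import numpy as np

def quantize(x, step=0.5):
    # Coarse quantization: snap each value to the nearest multiple of `step`.
    return np.round(x / step) * step

x = np.array([0.5, -0.5, 1.0])          # clean input
delta = 0.1 * np.array([1, -1, -1])     # small adversarial perturbation
x_adv = x + delta

# The perturbation is smaller than half the quantization step, so it is
# rounded away: the model sees the same input either way.
print(quantize(x), quantize(x_adv))
```

The intuition carries over: any perturbation below the effective precision of the arithmetic simply never reaches the classifier.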

jamtou: We are presenting our tutorial 'Coevolutionary Computation for Adversarial Deep Learning' at #GECCO2024 @GeccoConf today! 🧠💻 Join us to dive into the intersection of #CoevolutionaryAlgorithms and #AdversarialML! #AIResearch @itis_uma @MIT_CSAIL @UnaMayMIT

nasscom: (1/3) Adversarial Machine Learning safeguards #AI models against attacks. By understanding vulnerabilities and manipulating data inputs, robust defenses are built to ensure system integrity, reliability, and security. #TechSimplified #AdversarialML #AIsecurity

FMSepulveda: Adversarial AI attacks don’t just hack systems—they rewrite how AI sees reality.
➠ Poisoned loan algorithms that discriminate
➠ Drones fooled by invisible patterns
➠ Stolen facial recognition models
The battlefield is math. #AdversarialML #MachineLearning buff.ly/8ZbtYmG

HTBJill: Hack The Box released an HTB Academy module on gradient-based evasion (FGSM, I-FGSM, DeepFool). Essential for MSSPs focused on detecting adversarial attacks and hardening client models. Start: okt.to/v70H5W #HackTheBox #MSSP #AdversarialML #ThreatHunting

🙏 Huge thanks to @CienciaGob for supporting this work and to @UAHes for hosting the project. 🌍 If you work on AI security, adversarial ML, or trustworthy AI — let’s connect and collaborate! #TrustworthyAI #AISecurity #AdversarialML #CyberSecurity #MachineLearning


📢 We are excited to share that 4 new papers acknowledging #CoEvolution have been accepted for publication! 🎉 📝 Topics include adversarial robustness, federated learning & 3D perception. 👉 Explore them on our website: coevolution-project.eu/publications/ #AI #AdversarialML #EUFunded




AI’s greatest strength—its ability to learn—can also be its biggest weakness. Adversarial ML exploits this, turning tiny tweaks into massive threats. From crashing cars to tricking diagnoses, the risks are real. Time to fight back. #AIsecurity #AdversarialML #RobustAI


Adversarial ML is a wake-up call: attackers move fast—so must we. Building resilient AI means secure-by-design systems, constant vigilance, and adapting as threats evolve. The future of AI depends on trust. Let's defend it. #AIsecurity #AdversarialML #TrustworthyAI


KryptonAi: NIST finalized a practical playbook #NIST to defend AI 🛡️ with a clear taxonomy of attacks #AdversarialML and an index mapping threats to mitigations for #Cybersecurity teams, giving shared language and faster action #AIsecurity Now the real work is yours as you update threat…

🚨 Adversarial machine learning can deceive autonomous drones ... from GPS spoofing to visual perturbations, slight alterations can trigger misclassification or hijacked navigation, threatening safety and defense systems. Stay alert: defensenews.com/global/europe/… #AdversarialML


Model evaluation will force storytellers and scriptwriters to reevaluate, or stop crafting, adversarial illusions. Metrics don’t lie. Precision exposes the fiction. #AI #ModelEvaluation #AdversarialML #DataEthics


🚨 Deceptive Data Fusion: A Challenge for Real-World ML Ethics 🤖⚖️ 📁 Data_1: Emotionally skewed interaction logs 🧠 Data_2: Behavior spoofing & mimicry models 🔍 Data_3: 3+ years of third-party digital anomalies #AIethics #AdversarialML #DataPoisoning #MLAudit #DigitalRights


6/20 Master adversarial examples. These are inputs designed to fool ML models. A stop sign with strategic stickers can be classified as a speed limit sign. Tools like CleverHans and Foolbox are essential for this. #AdversarialML #ComputerVision


2/20 Start with the fundamentals: understand how AI attacks actually work. Adversarial examples aren't just academic; they're real threats. Spend time on FGSM, PGD, and C&W attacks. Get your hands dirty with the CleverHans or ART libraries. #AdversarialML #AIDefense
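As a warm-up before reaching for those libraries, PGD fits in a dozen lines of numpy against a toy linear model: a random start inside the ε-ball, repeated signed gradient steps, and projection back into the ball (the weights, input, and hyperparameters here are all invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])
x0 = np.array([1.0, -1.0])          # w @ x0 = 3.0: confidently class 1
y = 1.0
eps, alpha, steps = 1.2, 0.4, 10

def pgd(x0, restarts=3):
    best = x0
    for _ in range(restarts):
        x = x0 + rng.uniform(-eps, eps, size=x0.shape)  # random start in the ball
        for _ in range(steps):
            grad = (sigmoid(w @ x) - y) * w             # d loss / d x
            x = x + alpha * np.sign(grad)               # ascend the loss
            x = np.clip(x, x0 - eps, x0 + eps)          # project into the eps-ball
        if sigmoid(w @ x) < sigmoid(w @ best):          # keep the worst case found
            best = x
    return best

x_adv = pgd(x0)
print(sigmoid(w @ x0), sigmoid(w @ x_adv))   # confidence drops below 0.5
```

Random restarts are what distinguish PGD in practice: they make the attack far less likely to stall in a bad corner of the loss surface.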


Most orgs treat AI as a feature, not an attack surface. VerSprite’s new guide on AI Red Teaming shows how to simulate real-world attacks on ML pipelines, LLMs, and model supply chains. Read: versprite.com/blog/ai-red-te… #AIsecurity #RedTeam #AdversarialML


bel_margarita7: AI is transforming our world—but are we building it responsibly? From adversarial attacks to ethical dilemmas, the future of AI depends on how we integrate, defend, and govern it. #AI #ResponsibleAI #AdversarialML #Cybersecurity #TechPolicy

RedTeamAILabs: Adversarial AI attacks don’t just hack systems: they change how the systems perceive reality.
➠ Credit-scoring systems with hidden biases
➠ Drones fooled by invisible patterns
➠ Stolen facial recognition models
#MachineLearning #AdversarialML buff.ly/8ZbtYmG




ponguru: "Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models", 1st April, led by @VictorKnox99 #CCS2022 #AdversarialML [9/N]

RaymondChooAu: @alidehghantanha speaking on #AI and #adversarialML and running a #malware analysis workshop at the International Summer School on #ComputationalForensics (#SuCoFo2019)



ponguru: To identify parts of the sentence vulnerable to attack, we create a constituency parse tree and use a perplexity-difference-based metric to gauge the phrase peculiarity at each node. We prioritise our attack on the top-N nodes with the most peculiar phrases. #AdversarialML [4/N]
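The perplexity-difference idea can be sketched with a toy unigram language model: score each candidate phrase by how much the sentence's perplexity drops when the phrase is removed (the actual method uses a constituency parse and a real LM; the corpus, sentence, and add-one smoothing here are invented for illustration):

```python
import math
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())

def logprob(tok, alpha=1.0):
    # Add-one smoothed unigram log-probability.
    return math.log((counts[tok] + alpha) / (total + alpha * (len(counts) + 1)))

def perplexity(tokens):
    return math.exp(-sum(logprob(t) for t in tokens) / len(tokens))

def peculiarity(sentence, phrase):
    # How much does dropping the phrase lower the sentence's perplexity?
    without = [t for t in sentence if t not in phrase]
    return perplexity(sentence) - perplexity(without)

sentence = "the zyzzyva sat on the mat".split()
scores = {tok: peculiarity(sentence, [tok]) for tok in sentence}
print(max(scores, key=scores.get))   # the out-of-vocabulary word scores highest
```

Phrases whose removal most reduces perplexity are the "most peculiar" ones, and hence the first candidates for perturbation.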



j_whorley: I asked blenderbot.ai if they would be aware of someone intentionally attacking them (or not) … interestingly, they now think I’m both a student and a liar. One of those is true! #adversarialML #cybersecurity


DomeFiora: We can live with a panda being recognized as a gibbon, but what if a stop sign is made invisible to computer vision algorithms by adding small stickers? #adversarialML






eneelou: Slides for "The History of Adversarial AI" are online! See the quick summary of the past 10 years of AI vulnerability research. Get the PDF from the #HITB2021AMS website: conference.hitb.org/hitbsecconf202… #AdversarialML #SecureAI #TrustworthyAI #ResponsibleAI

crest_centre: @Bushra_Sabir led an insightful discussion on "#AdversarialML in #Cybersecurity" #crest_discussion. We discussed the key #whitebox, #greybox & #blackbox attacks & defense mechanisms in various #AI-enabled #security tasks such as #malware, #phishing, #spam, and #intrusion detection.



