Data Engines

@dataengines

Working on making the world safer for robots and humans. Holders of the world's 1st patent (2010) on distribution-free evaluation of noisy judges.

Data Engines reposted

Listen to Professor Mike Wooldridge (@wooldridgemike) on the @newscientist discussing how anxieties around AI distract us from the more immediate risks that the technology poses such as algorithmic bias & fake news. Listen here: institutions.newscientist.com/video/2422044-… #compscioxford #OxfordAI


Data Engines reposted

This Black History Month, we celebrate Deborah Raji (@rajiinio), a cognitive scientist, AI researcher and Mozilla Fellow who collaborated with our founder @jovialjoy at the MIT Media Lab and AJL to audit commercial facial recognition technologies from Microsoft, Amazon, IBM, and…

Data Engines reposted

TODAY: Our Director of Responsible AI Practice @ccansu will discuss #ResponsibleAI, defining goals for #AI projects and how to find expert #AIguidance with @eric_kavanagh on Inside Analysis at 3 p.m. EDT. Register for free! bit.ly/3UDESTe #RAI #AIwebinar


Data Engines reposted

As our celebration of Black History Month continues, we're shining a light on @timnitGebru, the co-founder of @black_in_ai and the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Her groundbreaking research on algorithmic…

Data Engines reposted

"The moment he told them he's going to join us, they quadrupled his offer" - Perplexity CEO @AravSrinivas on recruiting from Google (k, here's the video)


Data Engines reposted

'US says leading #AI companies join safety consortium to address risks' hpe.to/6019Vhpr7


Data Engines reposted

FTC’s rule update targets deepfake threats to consumer safety dlvr.it/T2qPKQ #Technology #Law #UnitedStates #Deepfake #AI


Data Engines reposted

GitHub: AI helps developers write safer code, but basic safety is crucial dlvr.it/T2qRrT

Data Engines reposted

Ready to take your AI safety research to the next level? UK & Canadian researchers can apply for an exchange programme. Here’s the details: 💷 £3,500 grant for logistical fees 🎓open to PhD & post-doctoral students 📅 applications close 26 March Apply now mitacs.ca/our-programs/g…

Data Engines reposted

The Monster group, also known as the Fischer-Griess Monster, is a very large structure in mathematics, particularly in group theory. It stands as the largest of the 26 sporadic finite simple groups, boasting around 8.08 x 10⁵³ elements. Discovered through the collaborative…

Data Engines reposted

leading to research that cuts corners, lacks validity, fails to serve or even harms impacted communities, and generates data but not knowledge. We urge human subjects researchers to engage with and build on best practices for participatory research (ex arxiv.org/abs/2209.07572) 8/10


Data Engines reposted

New paper from my group: "Using Counterfactual Tasks to Evaluate the Generality of Analogical Reasoning in Large Language Models" arxiv.org/abs/2402.08955 Thread below 🧵 (1/6)


The latest release (v0.1.5) of the ntqr Python package is out - building out the logic of evaluation in unsupervised settings so we can have provably safe evaluations of noisy agents when we give them tests for which we have no answer keys! ntqr.readthedocs.org/en/latest

Any intelligent being, whether human or robotic, would benefit from understanding the logic of evaluation in unsupervised settings to protect itself from its own mistakes. Check out how we are building it, ntqr.readthedocs.org/en/latest
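The core idea behind evaluating noisy judges without an answer key can be sketched with a toy example. This is a generic method-of-moments illustration, not the ntqr API: it assumes three conditionally independent binary judges with symmetric error rates, all better than chance, and recovers each judge's accuracy purely from observed pairwise agreement rates.

```python
import math
import random

def simulate_judges(n_items, accuracies, seed=0):
    """Simulate ground-truth binary labels and a panel of noisy judges
    that each label every item correctly with their given accuracy,
    independently of one another (conditional independence)."""
    rng = random.Random(seed)
    truth = [rng.randint(0, 1) for _ in range(n_items)]
    votes = [[t if rng.random() < p else 1 - t for p in accuracies]
             for t in truth]
    return truth, votes

def agreement_rate(votes, i, j):
    """Fraction of items on which judges i and j give the same label."""
    return sum(v[i] == v[j] for v in votes) / len(votes)

def estimate_accuracies(votes):
    """Method-of-moments estimate for three conditionally independent
    binary judges with symmetric error rates.  Writing q_i = 2*p_i - 1,
    the pairwise agreement rate is a_ij = (1 + q_i*q_j) / 2, so each
    product q_i*q_j = 2*a_ij - 1 is observable with no answer key, and
    q_1 = sqrt(q12*q13/q23) (and cyclically for the other two judges)."""
    q12 = 2 * agreement_rate(votes, 0, 1) - 1
    q13 = 2 * agreement_rate(votes, 0, 2) - 1
    q23 = 2 * agreement_rate(votes, 1, 2) - 1
    q1 = math.sqrt(q12 * q13 / q23)
    q2 = math.sqrt(q12 * q23 / q13)
    q3 = math.sqrt(q13 * q23 / q12)
    return [(1 + q) / 2 for q in (q1, q2, q3)]
```

Note the sign ambiguity: the square roots cannot distinguish a judge with accuracy p from one with accuracy 1 - p, which is why the better-than-chance assumption is needed here; the actual ntqr machinery is more general than this sketch.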


Data Engines reposted

🎥 Watch the full recording and continue the discussion: youtube.com/watch?v=M2nzXC… 🔗 Visit the website: alignment-workshop.com/nola-talks/ada… 🚀 Join us in building AI that's trustworthy and beneficial for all! Explore career opportunities at far.ai/jobs/

far.ai

Careers – FAR.AI

Join us to help ensure advanced AI systems are safe and beneficial.


If you believe, like @steveom and @tegmark, that we should have provably safe AI, check out the logic of evaluation in unsupervised settings that we have been building since 2010 with our first patent. ntqr.readthedocs.org/en/latest

The future is already here. We have been building it since 2010 with our first patent for unsupervised evaluation. ntqr.readthedocs.org/en/latest

How might superintelligent AI be prevented from catastrophically dangerous actions? By using tamperproof hardware that demands mathematical proof of safety to protect key infrastructure vulnerabilities? @steveom in the latest @LondonFuturists podcast londonfuturists.buzzsprout.com/2028982/144917…

londonfuturists.buzzsprout.com

Provably safe AGI, with Steve Omohundro - London Futurists



Data Engines reposted

GenAI brings new challenges, like deepfakes and misinformation threats. @ActiveFence is at the forefront, proactively leading the way in #AI #safety solutions. Check out our latest @TechCrunch article with @GroveVentures. techcrunch.com/2024/02/10/saf…


Data Engines reposted

CALL FOR PAPERS: Here it is, the call for papers for early career scholars to speak at The Lyceum Project - our AI Ethics with Aristotle conference taking place on June 20th, 2024 in Athens, Greece. Deadline for submissions is April 30th. Apply now! oxford-aiethics.ox.ac.uk/lyceum-project…
