-
Robust ML
A community-run reference for state-of-the-art adversarial example defenses.
-
inlinity - Trusting Artificial Intelligence
We help to create the foundations of reliable and trustworthy AI systems.
-
Home - Preventing the Unpreventable | Qwietᴬᴵ
Qwiet AI secures applications with AI-powered code analysis, natural language insights, and context-aware findings - built to empower developers and protect your software supply chain.
-
Trustworthy AI
Research topics: Deep Learning, Interpretability, Out-of-Distribution Detection, Adversarial Robustness, Testing ML.
-
Skyld
Transform your AI models into safe and valuable assets. Secure your on-device AI models against reverse engineering with a low computing footprint. Protect your competitive edge and technological advance. Deploy, manage…
-
Confident Security
Verifiably private Inference API. Develop AI products without worrying about security, privacy, or compliance. We take financial responsibility for any breach or misuse.
-
The Gatekeeper for Safe AI - ClueoAI
Pioneering research in AI security and safety. Building robust systems that ensure responsible development and deployment of artificial intelligence. Our research spans multiple critical areas in AI security and safety…
-
BlindAI
Discover BlindAI: Open-source AI privacy via Trusted Execution Environments and Python for secure model querying.
-
TRUE AI LAB :: Trustworthy Foundation, Responsible Future
-
Adversarial Model Analysis
The explosion of interest in machine learning coupled with the increasing availability of large amounts of data and tools for simple development of models has quickly led to an avalanche of examples of hurtful, unsafe or…
-
Robust AI
Immediate ROI with a no Capex, all-inclusive solution. Zero infrastructure changes required and rapid deployment. Start small and easily scale up as successful milestones are hit. Seamlessly switch between workflows as…
-
OffsideAI
Effortlessly manage your model training and inference pipelines and streamline your ML operations with our state-of-the-art, end-to-end ML-Ops platform. Rest assured knowing that our AI solutions are built with a strong…
-
www.predictable-ai.org
Progress in AI over the last decade has been accompanied by a lack of robustness, rooted in a predictability crisis: highly-accurate image classifiers fail on images with an unusual background, robots surprise us with unanticipated reward hacking phenomena, and powerful language models display
-
Home - Duality AI
Virtual worlds for solving real problems. Digital twin simulation that drives safe, predictable, and resilient smart systems
-
artificialintelligencenow.com
-
AI EdgeLabs
AI EdgeLabs - The most powerful AI cybersecurity tool
-
Adversarial Robustness - Theory and Practice
This web page contains materials to accompany the NeurIPS 2018 tutorial, "Adversarial Robustness: Theory and Practice", by Zico Kolter and Aleksander Madry. The notes are in **very early draft form**, and we will be updating them (organizing material more, writing them in a more consistent form with the relevant citations,...
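The tutorial above covers, among other topics, how adversarial examples are generated by perturbing an input in the direction of the loss gradient. As a minimal sketch of that idea (not taken from the tutorial's own materials), the Fast Gradient Sign Method applied to a toy logistic-regression model can be written in plain NumPy; all names and values here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_linear(x, y, w, b, eps):
    """Fast Gradient Sign Method for a logistic-regression model.

    With binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w.x + b) - y) * w, so the
    FGSM perturbation is eps times the sign of that gradient.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a clean input that the model classifies correctly.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])
y = 1.0  # true label

x_adv = fgsm_linear(x, y, w, b, eps=0.3)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_clean > 0.5, p_adv < p_clean)  # → True True
```

The perturbation is tiny per coordinate, yet it provably reduces the model's confidence in the true class; on deep networks the same one-step gradient-sign update is often enough to flip the prediction outright.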
-
Home - ASI | Artificial Superintelligence Alliance
The universal network for decentralized AI infrastructure; delivering secure interoperability, scalable and customizable consensus, and privacy-preserving peer-to-peer networks for data indexing and exchange, compute…
-
Homepage – Magazine of the Fraunhofer Institute for Cognitive Systems IKS
Fraunhofer IKS is your partner in developing innovative software approaches for Industry 4.0 and autonomous driving.
-
Examples of AI Improving AI
Examples of AI Improving AI
-
Home - Safe & Trusted AI
The Safe and Trusted AI PhD programme focusses on the use of symbolic AI techniques for ensuring the safety and trustworthiness of AI systems. The Centre’s direction, training and supervision is delivered by a team of…
-
AI Trust Management & Runtime Security Platform - Trustwise
Delivering a seamless, industry-optimized API for AI trust, ensuring business alignment and ethical integrity across all AI models.
-
Home
Empower your AI with transparency, security, and efficiency. Eliminate black-box uncertainties, reduce risks, harden models against attacks, and deploy confidently with dynamic testing, backdoor detection, and privacy-first tools. Take control today.
-
Home - Consilience AI
We deliver mission critical insights by uncovering hidden relationships.
-
RobustifAI | Generative AI through human-centric integration of neural and symbolic methods
RobustifAI aims to develop a rigorous design and deployment methodology tailored for reliable, robust and trustworthy Generative Artificial Intelligence (GenAI).
-
Threat Intelligence | Abnormal AI
Where Emerging Threats Meet Expert Analysis
-
Home - GUARDDOG AI
The Trusted Leader in Edge-to-Edge Cybersecurity Visibility and Response
-
OpenAI
Leading AI governance platform for enterprise compliance and responsible AI deployment. Automate EU AI Act compliance and risk management.
-
SAFE AI Taskforce
Stakeholders Advocating for Fair and Ethical AI in Interpreting
-
Enable Organizations to Deploy Accurate AI Agents Easily
Transform your AI approach by enabling organizations to deploy reliable and responsible AI agents that work with your data
-
Home
Software framework for runtime-Adaptive and secure deep Learning On Heterogeneous Architectures
-
Home - Center for Responsible AI
Sparking the next generation of AI Products
-
Home - SAFE Rural Health
Sensing, ArtiFicial intelligence, and Edge networking towards Rural Health monitoring. Focuses on setting up a pilot project to…
-
robustai.net
-
Home - Styrk
A comprehensive suite of capabilities to find, monitor and mitigate AI model issues. Scan, Spot, and Soften adversarial attacks and thoroughly understand your AI/ML models’ vulnerabilities. Quantitatively measure the…
-
WFVML '24
[Proposal] About This Workshop
As data and computing infrastructures become increasingly abundant, machine learning (ML) systems are applied to ever more problems. However, for safety-critical domains, their high performance alone is not enough: applications like autonomous driving, robotics
-
Home
Making society more resilient by using human understanding, state-of-the-art technology (eBPF, WebAssembly and AI) and empirical methods for achievable threat-informed cloud security - 100% independent - 100% reality…
-
Start - Safe AI Engineering
AI Engineering as an enabler for a well-founded safety argumentation across the entire lifecycle of an AI function.
-
Robust Machine Learning – Max Planck Institute for Intelligent Systems
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments and to use this understanding to design future systems.
-
Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) is an LF AI & Data Foundation project. ART provides tools that enable developers and researchers to evaluate, defend, and verify Machine Learning models and…
-
SAIL - Home
The overarching aim of the Safe Artificial Intelligence Lab is to develop novel computational methods and tools for providing safety guarantees to a wide range of autonomous systems, including autonomous vehicles…
-
LiVe Lab: Home
The Learning in Verification group (LiVe Lab) focuses on the interactions of machine learning and verification. Our research includes Explainable AI, Verification of Neural Networks, Stochastic Games and Control…
-
Trusted Autonomy - Home
Trusted Autonomy, Data Analytics, Autonomous Analytics, Decision Analytics, Big Data, Autonomous Systems, Trust, Autonomy, Artificial Intelligence, Cognitive Cyber Symbiosis
-
eyeknow.ai - Putting the eye on the things we don't know
Health & safety, operational efficiency and risk mitigation using cutting-edge AI and CCTV
-
SPT Model—Exploring Safety Insights in AI Competition
-
TruthfulAI
TruthfulAI: Building reliable and trustworthy AI systems for the future.
-
Home - CERTAIN | Center for Explainable, Transparent and Theory-Driven AI in the Netherlands
Center for Explainable, Responsible and Theory-driven Artificial Intelligence.