FairSense: Integrating Responsible AI and Sustainability
Authors: Shaina Raza, Mark Coatsworth, Tahniat Khan, and Marcelo Lotif
A new AI-driven platform extends bias detection to include text and visual content, while leveraging energy-efficient AI frameworks. Developed by Shaina Raza, an Applied ML Scientist in Responsible AI, and Vector’s AI Engineering team, FairSense-AI balances energy efficiency and bias safety.
With data centres accounting for up to 2% of global electricity usage, concerns about generative AI's environmental sustainability are rising alongside existing challenges around bias and misinformation. FairSense-AI combines energy-efficient AI frameworks with an AI-backed framework for identifying bias in multi-modal settings and an AI-driven risk management tool, giving users a structured approach to identifying, assessing, and mitigating AI-related risks. A Python package allows programmers to easily integrate FairSense-AI into their software.
Fairsense-AI analyzes text for bias, highlighting problematic terms and providing insights into stereotypes. The tool demonstrates how AI can promote fairness and equity in language analysis.
What Does it Do?
Building on UnBias, a previous bias neutralization tool developed by Vector, FairSense-AI identifies subtle patterns of prejudice, stereotyping, or favoritism to enhance fairness and inclusivity in digital content (text and images). Additionally, FairSense-AI leverages large language models (LLMs) and vision-language models (VLMs) that are optimized for energy efficiency, minimizing its environmental impact.
Optimization techniques reduced emissions to just 0.012 kg CO2, demonstrating that responsible AI practices can be both environmentally responsible and cost-effective when training LLMs.
The tool's reduced environmental impact can be seen by comparing the carbon emissions of Llama 3.2 1B (one of the foundation models integrated into it) before and after optimization and fine-tuning: emissions fell from 107,000 kg to just 0.012 kg of CO2 per hour of inference, highlighting how green AI goals can be achieved without compromising functionality or flexibility. The CodeCarbon software package was used to assess the environmental impact of code execution; it tracks electricity consumption during computation and converts it into carbon emissions, measured in kilograms (kg), based on the geographical location of the processing.
How Does It Work?
FairSense-AI collects text and image data from various sources and then uses LLMs and VLMs to detect subtle patterns of bias. It assigns a score based on the severity of the bias and offers recommendations for more fair and inclusive content. Throughout the process, FairSense-AI incorporates energy-efficient optimization techniques to align responsible AI with sustainability goals, leveraging local resources and free tools such as Kiln.
Fairsense-AI can analyze visual bias, highlighting systemic gender inequality in opportunities and resources
Fairsense Framework
- Data Preprocessing: collects and standardizes text and image data.
- Model Analysis: uses LLMs/VLMs to detect content imbalances.
- Bias Scoring: quantifies and highlights bias severity.
- Recommendations: provides strategies for bias reduction.
- Risk Identification: identifies AI risks for informed decisions.
- Sustainability: optimizes processes for eco-conscious bias mitigation.
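The staged flow above can be sketched as a minimal pipeline. This is an illustrative toy, not the FairSense-AI package's actual API: the lexicon, scoring rule, and all names below are hypothetical stand-ins for the LLM-backed analysis the real tool performs.

```python
from dataclasses import dataclass, field


@dataclass
class BiasReport:
    """Output of one analysis pass: flagged terms, severity, and fixes."""
    text: str
    flagged_terms: list = field(default_factory=list)
    score: float = 0.0  # 0.0 (no detected bias) to 1.0 (severe)
    recommendation: str = "no change needed"


# Toy lexicon standing in for model-driven detection.
BIASED_TERMS = {"bossy": "assertive", "manpower": "workforce"}


def analyze(text: str) -> BiasReport:
    # Preprocessing: normalize the input.
    words = text.lower().split()
    # Analysis: flag terms a model would mark as imbalanced.
    flagged = [w for w in words if w in BIASED_TERMS]
    # Scoring: crude severity based on flagged-term density.
    score = min(1.0, len(flagged) / max(len(words), 1) * 5)
    # Recommendations: suggest neutral replacements.
    rec = "; ".join(
        f"replace '{w}' with '{BIASED_TERMS[w]}'" for w in flagged
    ) or "no change needed"
    return BiasReport(text, flagged, score, rec)


report = analyze("We need more manpower for this project")
```

In the real system the lexicon lookup is replaced by LLM/VLM inference, and the same report structure extends to images and to the risk-identification stage.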
The science behind Fairsense's optimization lies in advanced techniques, including model pruning, mixed-precision training, and fine-tuning, that reduce model complexity while preserving performance. By selectively removing less critical parameters, switching to efficient numerical representations, and carefully refining pre-trained models, Fairsense significantly lowers computational demands and energy consumption. This streamlined approach not only maintains high accuracy and nuanced bias detection and risk identification, but also aligns with sustainability goals by minimizing the carbon footprint.
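Two of the techniques named above, pruning and reduced precision, are available off the shelf in PyTorch. The sketch below is illustrative of the general approach, not FairSense-AI's exact optimization code: it zeroes out the 30% smallest-magnitude weights of a single linear layer, then casts the layer to half precision.

```python
import torch
import torch.nn.utils.prune as prune

# A single layer stands in for a full model here.
layer = torch.nn.Linear(512, 512)

# L1-unstructured pruning: zero the 30% of weights with smallest magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)
prune.remove(layer, "weight")  # bake the mask in, making pruning permanent

sparsity = (layer.weight == 0).float().mean().item()  # ~0.30

# Reduced precision: store weights in float16, halving memory traffic.
layer = layer.half()
```

Pruned, low-precision weights cut both memory footprint and arithmetic cost per inference, which is where the energy savings described above come from.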
Moving forward, Vector researchers hope to add an AI risk management component that can identify AI risks, such as disinformation, misinformation, or linguistic and visual bias, based on queries. This risk management framework, designed by Tahniat Khan, will draw on the MIT Risk Repository and the NIST Risk Management Framework, aligning with widely recognized best practices for effective AI risk management.
Conclusion
Technology can be both transformational and ethical: generative AI is a powerful tool, but it also introduces a new set of risks. FairSense-AI sets a new standard for responsible AI innovation by making bias detection and risk identification accessible to both technical and non-technical audiences while maintaining a focus on energy efficiency. It is possible to prioritize responsible AI practices that benefit society and the planet without sacrificing innovation. With solutions like this we can harness AI's potential while ensuring a more equitable and sustainable future for all.