Skip to content

RECENT POSTS

Unlock Success with NUCIDA: Your Trusted Partner in Transformation

Are you navigating the complexities of software quality or digital transformation? At NUCIDA, we specialize in supporting leaders like you - CEOs, CIOs, IT, quality, and test managers - by tackling your most urgent challenges or crafting strategies for your next big project.

Our consulting expertise ensures your company stays competitive and thrives in today's fast-paced market. Whether you need immediate solutions or a long-term vision, our team delivers results that drive success. Ready to elevate your business? Discover how we can empower your growth and innovation.

In this blog, we share the latest trends, tools, and techniques shaping the future of software development and quality assurance.

The Dawn of ISO/IEC 42119

 

Imagine a world where your chatbot doesn't accidentally spew harmful advice, where self-driving cars navigate chaos without hallucinating a phantom pedestrian, and where AI-powered medical diagnostics catch diseases before they whisper a warning. Sounds like science fiction? Not anymore. Enter ISO/IEC 42119, the groundbreaking international standard that's arming the AI revolution with the rigor it desperately needs. As of late 2025, this suite of technical specifications is no longer inked on paper; it's the blueprint for trustworthy AI, transforming wild innovation into fortified intelligence.

artificial intelligence 01Is ISO/IEC 42119 the key to unlocking ethical AI, or just another layer of bureaucracy?.


Revolutionizing AI Safety

If you've been following the AI boom (and who hasn't?), you know the headlines: generative models churning out deepfakes, biased algorithms perpetuating inequality, and rogue systems vulnerable to clever hacks. Regulators are scrambling, ethicists are sounding alarms, and developers are left wondering how to build without breaking the law. ISO/IEC 42119 steps in as the unsung hero, a comprehensive framework for testing AI systems.

Born from the collaborative genius of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), it's not about stifling creativity; it's about channeling it safely into the real world.


The Anatomy of ISO/IEC 42119: From Blueprint to Battle-Tested AI

At its core, ISO/IEC 42119 isn't a single rulebook; it's a modular series of technical specifications (TS) designed to evolve with AI's relentless pace. Think of it as a Swiss Army knife for AI validation: versatile, precise, and ready for any scenario. Launched in draft form earlier this year, the standard has already sparked a wave of adoption, with final publications expected to trickle out by mid-2026. Here's a peek under the hood, listing some of the interesting parts:

  • Part 2: Overview of Testing AI Systems (ISO/IEC DTS 42119-2). This is your mission control. It lays out a high-level roadmap for evaluating AI, from functional checks (Does it do what it's supposed to?) to robustness tests (Can it handle edge cases without crumbling?). It's agnostic to AI flavors, whether you're tweaking a neural net for image recognition or fine-tuning a large language model like me, ensuring every system gets a fair stress test.
  • Part 3: Testing for Bias and Fairness (ISO/IEC DTS 42119-3). In an era where AI mirrors society's flaws, this part is the ethical scalpel. It guides developers through detecting and mitigating biases in data, models, and outputs, aligning with UN Sustainable Development Goals such as good health and responsible consumption. Picture auditing a hiring algorithm to ensure it doesn't favor one demographic over another, 42119-3 makes that systematic, not speculative.
  • Part 7: Red Teaming for AI Security (ISO/IEC AWI TS 42119-7). Ah, the thrill of the hunt! This one's for the cybersecurity daredevils. It codifies "red teaming", a simulated adversarial attack method that exposes vulnerabilities, such as tricking an AI into generating malicious code or bypassing safety filters. Drawing from military-grade tactics, it emphasizes comprehensive risk identification and structured testing protocols, turning potential disasters into debugged triumphs.
  • Part 8: Quality Assessment of Prompt-Based Text-to-Text Systems (ISO/IEC NP TS 42119-8) zooms in on generative AI wonders like chatbots and summarizers.

Closing voting in June 2025, it tackles prompt engineering pitfalls, incorporating red teaming to evaluate safety and quality. If your AI is "prompted" to write essays or code, this ensures it's not just clever, but it's compliant.


NUCIDA Request a Call


Why ISO/IEC 42119 Matters: Beyond Compliance, Into Competitive Edge

Let's cut the jargon: In 2025, AI isn't optional. It's oxygen for industries from finance to fashion. But with great power comes... well, you know the drill. The EU AI Act's leaner 2025 iteration demands rigorous testing, and ISO/IEC 42119 is the golden ticket to compliance without the headache. Moreover, it's a strategic superpower.

Consider the numbers: A 2025 Deloitte report pegs AI-related breaches at $4.5 trillion annually by 2026 if unchecked. ISO/IEC 42119 flips the script, enabling proactive fortification. Companies like Advai are already evangelizing it, noting how red teaming turns audits into assets, spotting flaws early saves millions, and builds unbreakable trust.

For developers, it's liberating: Standardized tests mean less reinventing the wheel, more iterating on brilliance. Ethicists cheer its bias-busting focus, aligning AI with global equity goals. And for end-users? Safer tools that empower rather than endanger.

Take healthcare: An AI diagnosing X-rays under 42119-3's fairness lens could prevent misdiagnoses skewed by underrepresented data, saving lives in underserved communities. Or e-commerce: Generative recommenders tested via Part 8 avoid toxic suggestions, boosting sales and satisfaction.


EU AI Act: Europe's Groundbreaking AI Regulation

Besides the ISO 42119, the European Union has released the EU AI Act that stands as the world's first comprehensive legal framework for governing artificial intelligence. Published in the Official Journal on July 12, 2024, and entering into force on August 1, 2024, this landmark legislation aims to foster trustworthy, human-centric AI while mitigating its risks to safety, fundamental rights, and society. Building on the success of the GDPR, the AI Act adopts a risk-based approach to balance between innovation and protection, positioning Europe as a global leader in ethical AI governance.

The Act classifies AI systems into four tiers based on potential harm, with escalating obligations. Most AI falls into the minimal-risk bucket, but high-stakes uses face rigorous scrutiny. 

Risk Level Description Examples Obligations Effective Date
Unacceptable Risk Banned practices that threaten rights, safety, or democracy. Social scoring by governments; real-time biometric identification in public spaces (except limited law enforcement); emotion recognition in workplaces/education; untargeted facial recognition database scraping; manipulative subliminal techniques. Complete prohibition. February 2, 2025
High Risk Systems impacting health, safety, or rights; conformity assessment. AI in critical infrastructure (e.g., traffic control); education (e.g., exam grading); employment (e.g., CV screening); credit scoring; law enforcement (e.g., predictive policing); medical devices; biometric categorization inferring sensitive traits. Risk management, high-quality data, transparency, human oversight, cybersecurity, logging, documentation, and post-market monitoring. Third-party audits for some. August 2, 2026 (general); August 2, 2027 (if embedded in regulated products)
Limited Risk (Transparency) Systems need user awareness to prevent deception. Chatbots; deepfakes; AI-generated text/images (must be labeled, e.g., watermarks for deepfakes). Disclose AI interaction; label synthetic content. August 2, 2026
Minimal/No Risk Everyday low-impact AI. Spam filters, video games, and recommendation engines (non-personalized). None, voluntary codes encouraged. N/A

Key Obligations for Providers and Deployers

  • Providers: Conduct conformity assessments for high-risk/GPAI; ensure robustness and accuracy; register systems in an EU database; report malfunctions within 15 days.
  • Deployers: Monitor operations; ensure trained personnel for high-risk uses; conduct fundamental rights impact assessments (especially for public bodies).
  • Shared: Post-market monitoring; cooperation with authorities; early voluntary compliance via the AI Pact (launched 2024).

Support includes the AI Act Service Desk and regulatory sandboxes for safe testing.

Governance and Enforcement

  • European AI Office: Oversees implementation, supervises GPAI, and fosters international collaboration. Recruited AI specialists in March 2025.
  • National Authorities: Market surveillance and notifying bodies (designated by August 2025; three Member States fully ready as of late 2025).
  • Advisory Bodies: AI Board (coordination), Scientific Panel (technical advice), Advisory Forum (stakeholder input).
  • Penalties: Up to €35 million or 7% of global turnover for prohibited practices; €15 million or 3% for other violations. Whistleblower protections tie into the EU Whistleblowing Directive.

The Act integrates with GDPR, Product Liability Directive, and sector-specific laws, with proposals clarifying overlaps in November 2025.


NUCIDA Request a Call


No More Testing Headaches with NUCIDA!

Building top-notch software doesn’t have to be a struggle. At NUCIDA, we’ve cracked the code with our B/R/AI/N Testwork testing solution - pairing our QA expertise with your test management tool to deliver streamlined processes, slick automation, and results you can count on. On time. Hassle-free. Ready to ditch future headaches? Let NUCIDA show you how!

NUCIDA certified consulting partner 03

Among others, NUCIDA's QA experts are certified consultants for Testiny, SmartBear, TestRail, and Xray software testing tools.

Why Choose NUCIDA?

For us, digitization does not just mean modernizing what already exists but, most importantly, reshaping the future. That is why we have made it our goal to provide our customers with sustainable support in digitizing the entire value chain. Our work has only one goal: your success! 

  • Effortless Tool Setup: We’re test management wizards, simplifying setup and integrating it with your favorite testing tools. Boost efficiency and accuracy with configurations tailored to your unique goals - complexity made easy.
  • Superior Test Management: Our expert consulting supercharges your test management experience. Whether you’re launching a test management tool or leveling up, we streamline your testing for top-notch outcomes with precision and customization.
  • Top-notch Automation: Our certified automation pros build frameworks that fit like a glove, integrating seamlessly with Xray. From fresh setups to fine-tuning, we deliver fast, flawless results.
  • Flawless Test Execution: Our certified testers bring precision to every manual test, ensuring your apps shine with unbeatable reliability and performance. Quality? Nailed it.
  • Insightful Reporting: Unlock game-changing insights with your tool's reporting tweaked to your needs. Our detailed quality reports empower smart, reliable decisions at every level.
  • Proven Reliability: With 30+ years of experience, proprietary frameworks, and certified expertise, we craft efficient, easy-to-maintain solutions that keep you ahead of the curve.

Don’t let testing slow you down. Explore how consulting services can make your software quality soar - headache-free! Got questions?  We’ve got answers. Let’s build something amazing together!


The Road Ahead: AI's Litmus Test for Trust

As we hurtle toward 2030, when AI is expected to automate 45% of work tasks (per McKinsey), ISO 42119 isn't just timely. Furthermore, it's transformative and offers the necessary QA related standards to support the mentioned EU AI Act. It's the guardrail ensuring our digital dreams don't derail into dystopia. Critics might whisper "overregulation," but history whispers back: Standards like ISO 9001 revolutionized quality control; 42119 could do the same for artificial intelligence.

So, innovators, policymakers, and curious minds: Dive in. Grab the drafts from ISO's site, join a working group, or pilot a red team exercise. The future of AI isn't written in code alone. It's tested in the fire of standards like this, and the NUCIDA QA and AI experts can guide you through it.


A Turning Point, Not a Finish Line

As 2025 draws to a close, the EU AI Act is no longer a proposal gathering dust in Brussels. It is law, with prohibited practices already banned, general-purpose AI providers under binding obligations, and the countdown ticking loudly toward the high-risk rules of August 2026. Europe has drawn a line in the sand: innovation is welcome, but only if it is transparent, accountable, and respectful of the rights that define European democracy. Developers, audit your code using tools like SonarQube, and stay vigilant.

The hardest work now begins: turning thousands of pages of legal text into everyday engineering practice, into red-team reports, bias audits, and fundamental rights impact assessments. Standards like ISO/IEC 42119 will be indispensable companions on that journey, translating the Act’s principles into testable, repeatable processes.

Ultimately, the EU AI Act marks not the end of AI risk, but the beginning of responsible AI scale. The tools are on the table. The choice is ours.

What do you think? Is ISO/IEC 42119 the key to unlocking ethical AI, or just another layer of bureaucracy? Drop your thoughts in the comments. Let's take this conversation to the next level.

Stay curious, stay safe, and see you next time!

Testing AI Systems Systems

Want to know more? Watch our YouTube video,  Testing AI Systems, to learn more about the latest developments.

Pictures / Logos from pixabay.com and NUCIDA Group
Article written and published by Torsten Zimmermann

Any questions or hints? Please leave a comment...