Post by: Monika
Photo: Reuters
The European Union (EU) has published new guidance for companies and creators who build AI (Artificial Intelligence) systems. The guidance explains how they can follow new laws that protect people and make sure AI is safe to use. The rules focus especially on AI systems that could cause big problems if something goes wrong.
Why Is the EU Giving These Rules?
AI technology is growing very fast. It is used in many areas, such as medicine, finance, and transportation. While AI can help a lot, it can also create risks if it is not used carefully. For example, some AI systems might make wrong decisions, show unfair results, or even harm people.
The EU wants to make sure AI is used safely and fairly. To do this, it has passed a new law called the "EU AI Act." This law sets out how AI systems should be built and used to avoid risks.
What Are AI Models with Systemic Risks?
Some AI systems have bigger effects than others. If they fail or make mistakes, the problems they cause can affect many people or important parts of society. These AI systems are called “AI models with systemic risks.”
For example, AI used by banks to approve loans, AI that helps control power grids, or AI used in important government services can carry systemic risks. If these systems do not work properly, they can cause large problems for many people.
Because of these risks, the EU is especially careful about these kinds of AI.
What Does the New Advice Say?
The EU’s new advice helps AI builders understand what they need to do to follow the rules. It gives clear steps and tips on how to design AI that is safe, fair, and transparent.
Some important points in the advice are:
Risk Assessment: AI builders should carefully check where their AI could cause harm or mistakes. They must think about how serious the problems could be and who might be affected.
Data Quality: The data used to train AI must be good and balanced. If the data is wrong or unfair, the AI might make biased or incorrect decisions.
Transparency: People using AI systems should know how the AI works and what it does. This means companies should explain AI decisions clearly so users can understand them.
Human Oversight: AI should not make important decisions without people checking. Humans should be able to review and control AI decisions, especially for big risks.
Security Measures: AI systems must be protected from hackers or attacks. The advice says companies should build strong security into their AI.
Testing and Monitoring: AI models must be tested before use and regularly checked during use to find and fix problems quickly.
Who Needs to Follow These Rules?
The advice is mainly for companies and groups that create or use AI with big risks. This includes banks, hospitals, energy companies, governments, and others who use AI in important ways.
Even smaller companies must pay attention if their AI affects many people or important services.
Why Is This Important?
AI is becoming part of everyday life. We see it in voice assistants, search engines, medical diagnosis, and more. But when AI affects important decisions, it must be trustworthy.
If AI systems make mistakes, they can hurt people's lives, cause unfair treatment, or disrupt critical services. That's why the EU wants to make sure AI is safe and responsible.
By giving this advice, the EU helps AI makers avoid problems before they happen.
How Will This Help People?
With these rules and advice, people can trust that the AI systems used around them have been checked and are safe. The rules are designed to protect people, and the EU takes violations seriously. This pushes companies to take responsibility and build better AI systems.
How Does This Fit with Other Global Efforts?
Other countries and regions, such as the United States and parts of Asia, are also working on rules for AI. The EU's approach is one of the most detailed and strict.
By leading with clear rules, the EU hopes to set an example for safe and ethical AI worldwide.
What Is Next for AI and the EU?
AI is a powerful tool that can help many parts of life, but it also comes with risks. The European Union is making sure that AI systems with big effects follow strong rules to keep people safe and treat them fairly.
With clear advice and laws, the EU wants AI to be trustworthy and used in ways that benefit everyone. This new guidance helps AI creators understand their responsibilities and how to follow the law.
For people living in the EU, this means safer AI systems and better protection from AI mistakes. It also sets a path for the future where AI and humans can work together responsibly.