In a nutshell
This article explores the links between artificial intelligence (AI) and the GDPR (General Data Protection Regulation), highlighting the benefits, risks, and constraints of using AI in schedule management, especially through tools such as PlanningPME.
What is the GDPR?
The GDPR (General Data Protection Regulation) is a European regulation, in force since 25 May 2018, that aims to protect the personal data of citizens of the European Union (EU) and to harmonise data protection laws across the EU.
The GDPR is a legal framework that sets out the rules for the collection, processing, retention, and security of individuals' personal data. It gives citizens greater control over their data while imposing obligations on the companies and organisations that collect or process this data.
What is AI?
Artificial intelligence (AI) is a discipline of computer science that aims to create systems that can simulate human cognitive processes, such as learning, reasoning, problem-solving, object or sound recognition, and decision-making. In other words, AI allows machines to perform complex tasks that previously required human intervention.

What are the dangers of AI in relation to the GDPR?
The dangers of artificial intelligence (AI) in relation to the General Data Protection Regulation (GDPR) mainly concern the protection of personal data and the rights of individuals. Here are some critical points to consider:
Bulk collection of personal data:
AI systems, especially those based on machine learning, need large amounts of data to be effective. This may result in excessive or unnecessary collection of personal data. Under the GDPR, companies must ensure that only the data that is strictly necessary is collected and used (data minimisation principle).
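To make the data minimisation principle concrete, here is a minimal illustrative sketch in Python (field names are hypothetical, not any tool's actual schema): before a record reaches an AI component, every field outside an explicit whitelist of necessary fields is stripped.

```python
# Illustrative sketch of data minimisation: keep only whitelisted fields.
# Field names are hypothetical examples.

ALLOWED_FIELDS = {"task_id", "start_time", "end_time", "required_skill"}

def minimise(record: dict) -> dict:
    """Keep only the fields strictly necessary for the stated purpose."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

raw = {
    "task_id": 42,
    "start_time": "2025-05-06T09:00",
    "end_time": "2025-05-06T12:00",
    "required_skill": "electrician",
    "employee_home_address": "12 Example Street",  # not needed: dropped
    "date_of_birth": "1990-01-01",                 # not needed: dropped
}
print(minimise(raw))  # only the four whitelisted fields remain
```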
Bias and discrimination:
AI algorithms can inherit biases from their training data, which can lead to unfair discrimination, for example on the basis of race, gender, or ethnicity. The GDPR imposes transparency and fairness obligations, meaning that automated decisions must not have disproportionately negative effects on certain categories of people.
Lack of transparency:
Many AI algorithms work like "black boxes", making it difficult for individuals to understand how their data is used or how decisions about them are made. The GDPR requires transparency about how personal data is processed, including by the algorithms that influence significant decisions about individuals.
Violation of the right to erasure ("right to be forgotten"):
AI systems can make it difficult to enforce the right to erasure (Article 17 GDPR), as data can be disseminated across multiple systems or irreversibly transformed. Companies using AI must put mechanisms in place to allow the erasure of personal data at the request of users.
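As a purely illustrative sketch (store names are hypothetical), an erasure-request handler would need to remove a user's data from every store an AI pipeline touches, not just the primary database:

```python
# Hypothetical sketch of an Article 17 erasure handler. In-memory dicts
# stand in for the stores an AI pipeline can spread personal data across.

raw_records = {"user-123": {"name": "A. Martin", "email": "a.martin@example.com"}}
feature_store = {"user-123": [0.12, 0.87, 0.33]}   # derived features
prediction_cache = {"user-123": "high-priority"}   # cached model output

def erase_user(user_id: str) -> None:
    """Remove the user's personal data from every store we control."""
    for store in (raw_records, feature_store, prediction_cache):
        store.pop(user_id, None)
    # Caveat: data already baked into a trained model is NOT removed by
    # this; that typically requires retraining or "machine unlearning".

erase_user("user-123")
assert "user-123" not in raw_records  # gone from all three stores
```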
Automated decision-making:
The GDPR grants individuals the right not to be subject to fully automated decisions that have legal or significant effects on them (Article 22). However, many applications of AI can fall into this category, particularly in the banking or human resources sectors. Companies must obtain explicit consent from the individual or ensure that other safeguards are in place to protect users' rights.
Data security:
AI systems can be vulnerable to cyberattacks, endangering the security of personal data. The GDPR requires appropriate security measures to protect data from breaches.
Liability issues:
If an AI system causes a data breach or damage due to automated decisions, it can be difficult to determine who is responsible: the creator of the algorithm, the entity using the AI, or another party. The GDPR imposes significant penalties for violations, so clarifying responsibilities is essential.
In summary, the dangers of AI in relation to the GDPR are mainly related to excessive data collection, biases in automated decisions, lack of transparency, and the difficulty of respecting certain fundamental rights, such as the right to be forgotten. Companies must be particularly vigilant when using AI in processes involving personal data.
Does AI really comply with the principles of the GDPR?
Whether AI truly complies with the principles of the GDPR is a complex question: it depends on how artificial intelligence is implemented, managed, and monitored. The GDPR sets clear rules for the protection of personal data, and AI systems must comply with them. However, several technical and ethical challenges arise in this context. Here are the main aspects to consider:
- Data minimisation principle:
The GDPR requires that only data that is necessary for a specific purpose is collected and processed. However, AI, especially machine learning systems, tends to rely on large amounts of data to "learn" and improve its performance. Adhering to this principle in AI systems can be difficult, as it can be tempting to accumulate data to improve algorithms, even if some of it is not strictly necessary.
- Explicit and informed consent:
The GDPR requires that individuals give explicit and informed consent for their data to be used, which means they need to know how their data will be processed by AI. However, the complexity of AI algorithms often makes it difficult to explain this clearly to users, and whether AI systems truly adhere to this principle remains a contested issue.
- Right to be forgotten and rectification of data:
The GDPR grants individuals the right to request the erasure of their personal data ("right to be forgotten") or the rectification of inaccurate data. With AI, especially machine learning-based systems, once data has been used to train a model, it can be difficult to remove that data completely or to correct the impact of incorrect data. Compliance with this principle is particularly problematic, as AI systems can retain traces of data even after it has been formally deleted.
- Automated decision-making and the right to human intervention:
The GDPR prohibits companies from subjecting individuals to fully automated decisions (such as those made by AI) without human intervention when those decisions have legal or similarly significant consequences. This means that mechanisms must be put in place to allow a human to intervene and to challenge decisions made by an AI. In practice, it is often difficult to ensure sufficient human oversight of AI systems, especially when they are widely used in critical processes (such as recruitment or credit approval).
- Transparency and explainability:
The GDPR requires transparency about how personal data is processed, which includes a clear explanation of how an automated decision was made. AI algorithms are often opaque (the "black box" phenomenon), making it difficult for organisations to comply with GDPR transparency requirements. Many AI technologies are not yet mature enough to provide understandable explanations to users, which calls their compliance with this principle into question.
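As a toy illustration of what explainability can look like, the sketch below uses an intentionally transparent linear scoring rule (weights and features are invented) whose per-feature contributions can be reported to the person concerned; real AI models are rarely this simple, which is precisely the problem:

```python
# Toy example of an explainable automated decision: a linear score whose
# per-feature contributions can be shown to the individual. Invented data.

WEIGHTS = {"years_of_experience": 2.0, "missed_deadlines": -3.0, "certifications": 1.5}
THRESHOLD = 5.0

def decide(applicant: dict) -> tuple[bool, dict]:
    """Return the decision plus the contribution of each feature."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()) >= THRESHOLD, contributions

accepted, why = decide({"years_of_experience": 4, "missed_deadlines": 1, "certifications": 1})
print(accepted)  # True (score 6.5 >= threshold 5.0)
print(why)       # {'years_of_experience': 8.0, 'missed_deadlines': -3.0, 'certifications': 1.5}
```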
- Data security:
The GDPR imposes security measures to protect personal data against loss, unauthorised access, or unlawful processing. AI systems, especially those hosted in the cloud or built on complex architectures, can be vulnerable to cyberattacks, posing a risk to the security of personal data. If a breach does occur, it can lead to heavy penalties under the GDPR, especially if the data processed by the AI was not properly secured.
AI can comply with the principles of the GDPR, but this requires constant vigilance and significant efforts to adapt systems to the requirements of the regulation. Many AI companies and developers are working to improve transparency, security, and data management to meet GDPR requirements, but there are still significant challenges to overcome, especially when it comes to data minimisation, automated decision-making, and algorithm explainability. As it stands, the strict application of GDPR principles in AI systems is not always guaranteed, especially in more complex areas.
Can AI collect my data without my consent?
No, in principle, AI cannot collect your personal data without your consent under the General Data Protection Regulation (GDPR). The GDPR imposes strict rules on the collection, use, and processing of personal data. However, there are nuances and exceptions to this rule (the GDPR also recognises legal bases other than consent), as well as challenges in practice.
Here's an overview:
- Explicit consent required:
The GDPR requires companies and systems that process personal data to obtain explicit and informed consent before collecting or processing data. This means that users must be informed about how their data will be used, by whom, and for what purposes. To be valid, consent must be freely given, specific, informed and unambiguous. Users must be given the opportunity to accept or refuse the processing of their personal data.
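As an illustration, a consent record built around these criteria might look like the sketch below (all names and fields are hypothetical):

```python
# Sketch of a consent record reflecting the GDPR criteria above: specific
# (one record per purpose), informed (the notice shown is stored),
# unambiguous (an explicit yes/no), and revocable (latest decision wins).

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str      # e.g. "schedule-optimisation" (hypothetical purpose)
    granted: bool     # explicit yes/no, never a pre-ticked default
    info_shown: str   # the notice the user actually saw
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_process(records: list[ConsentRecord], user_id: str, purpose: str) -> bool:
    """Allow processing only if the latest decision for this purpose is a 'yes'."""
    decisions = [r for r in records if r.user_id == user_id and r.purpose == purpose]
    return bool(decisions) and max(decisions, key=lambda r: r.timestamp).granted
```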
- AI and the difficulty of obtaining clear consent:
AI systems that use data collection methods, such as behavioural tracking or analysis of user preferences, can collect data in a more discreet way, sometimes without users being fully aware of the types of data captured. In some cases, AI systems are embedded in platforms or applications that might not inform users clearly enough about data collection or obtain ambiguous consent (e.g., via complicated interfaces or pre-ticked boxes). However, according to the GDPR, this type of implicit collection is not compliant, and consent must be explicit and informed.
- Traceability and transparency:
The GDPR requires complete transparency on how data is collected and processed. Users should be able to understand what data is being collected and for what purposes. AI systems must therefore be configured to inform users about their data processing, often via privacy policies, contextual notices, or consent interfaces.
- Dangers of unintentional collection:
Although the GDPR protects, in principle, against the collection of data without consent, some companies may circumvent these rules, unintentionally or deliberately, especially with complex AI systems. For example, anonymous or aggregated data may be collected without consent, yet in some cases this data can be "re-identified", especially if it is cross-referenced with other datasets.
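Pseudonymisation is one common safeguard against this. The sketch below (standard library only, with a placeholder key) shows keyed pseudonymisation using HMAC; note that under the GDPR the result generally remains personal data, because whoever holds the key can recompute the token for a known identifier and link records back to it:

```python
# Sketch of keyed pseudonymisation with HMAC. Unlike a bare hash, the
# secret key stops third parties from rebuilding the mapping by hashing
# candidate identifiers. The key here is a placeholder: in practice it
# belongs in a secrets manager, never in source code.

import hashlib
import hmac

SECRET_KEY = b"placeholder-key-store-in-a-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("alice@example.com"))  # same input always yields the same token
```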
- Behaviour tracking and cookies:
Many AI systems are used to analyse online behaviours through cookies or other tracking technologies. Consent is required for tracking via non-essential cookies (those that are not strictly necessary for the operation of a website). Internet users must give their explicit consent, often by means of a cookie banner. If a site or app processes your data through these AI systems without your explicit consent for the use of non-essential cookies, this contravenes the GDPR.
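As a minimal sketch of this rule (cookie names are illustrative), a site might gate non-essential cookies behind an explicit opt-in:

```python
# Sketch of consent-gated cookies: essential cookies are always allowed,
# non-essential ones (analytics, tracking) require an explicit opt-in.

ESSENTIAL_COOKIES = {"session_id"}  # strictly necessary for the site to work

def set_cookie(response_cookies: dict, name: str, value: str, consent_given: bool) -> None:
    """Set the cookie only if it is essential or the user has opted in."""
    if name in ESSENTIAL_COOKIES or consent_given:
        response_cookies[name] = value

cookies: dict = {}
set_cookie(cookies, "session_id", "abc123", consent_given=False)    # set: essential
set_cookie(cookies, "analytics_id", "xyz789", consent_given=False)  # skipped: no consent
print(cookies)  # {'session_id': 'abc123'}
```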
- Data obtained from third parties:
In some cases, companies may obtain data via third parties (such as business partners) and use it to train AI systems. These third parties must have obtained the user's consent to share the data, and the company using the data must also ensure that the use is compliant with GDPR rules.
In summary, AI cannot collect your personal data without your consent, except in limited cases provided for by the GDPR (such as legitimate interest or the performance of a contract). However, in practice, there are cases where AI data collection can be opaque or poorly communicated, raising concerns about full compliance with GDPR principles. To protect your data, it is essential to read privacy policies and understand consent settings on AI-powered platforms.
Are AI algorithms biased or discriminatory?
Yes, artificial intelligence (AI) algorithms can be biased or discriminatory, and this is a major concern in the development and use of AI systems. Although AI is often perceived as impartial and objective, several factors can introduce bias and discrimination into the decisions made by these algorithms. Here's why and how this can happen:
- Bias in training data:
AI systems, especially those based on machine learning, are trained on large amounts of data. If this data contains existing or historical biases, the algorithm will learn and reproduce them. For example, if the data used to train a recruitment model comes from years when women were underrepresented in certain technical positions, the algorithm could systematically penalise female applicants. Facial recognition is another example: studies have found that some facial recognition algorithms are less accurate at identifying dark-skinned people, because they were mostly trained on images of light-skinned people.
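One practical safeguard is to audit a model's outcomes by group. The sketch below, using invented data, compares positive-outcome rates across two groups; a large gap does not prove discrimination on its own, but it flags something worth investigating:

```python
# Sketch of a simple fairness audit: compare the rate of positive model
# outcomes per group on held-out data. Predictions and groups are invented.

from collections import defaultdict

def selection_rates(predictions: list[bool], groups: list[str]) -> dict[str, float]:
    totals: dict = defaultdict(int)
    positives: dict = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

preds  = [True, False, True, True, False, False, True, False]
groups = ["A",  "A",   "A",  "A",  "B",   "B",   "B",  "B"]
print(selection_rates(preds, groups))  # {'A': 0.75, 'B': 0.25}: a gap to investigate
```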
- Algorithm design:
Algorithm designers can, often unintentionally, introduce biases in the choice of variables to be taken into account or in the objectives they set for the algorithm. For example, if a bank lending algorithm uses criteria such as address or credit history, it may indirectly discriminate against certain populations (such as minorities or people living in disadvantaged neighbourhoods), as these criteria may reflect historical social inequalities.
- Data selection bias:
If the sample of data used to train an algorithm is not representative of the real population, it can lead to biases. For example, an algorithm trained only on data from a certain region or a particular demographic group can malfunction when used on different populations. This under-representation in the data can lead to less accurate predictions for minority groups.
- The "black box" effect:
Many AI algorithms, especially those based on neural networks or deep learning techniques, are often referred to as "black boxes" because their internal processes are difficult to understand even by their creators. This can make it difficult to detect biases or discrimination in the operation of the algorithm. The lack of transparency also makes it more difficult to know why a specific decision was made, such as in cases where an algorithm denies a loan or recommends a particular action in healthcare.
- Reinforcement of inequalities:
If AI algorithms are used in sensitive sectors (justice, health, recruitment, finance), they can perpetuate or even worsen existing inequalities. For example, an AI system used in criminal justice could recommend harsher sentences for certain racial groups due to historical biases in conviction data. Similarly, credit systems that exclude people with limited financial history or low credit scores can disadvantage people with low incomes or those from marginalised minorities.
- Indirect discrimination:
Even if sensitive variables such as race, gender or sexual orientation are not explicitly used in the algorithm, other seemingly neutral variables can have indirect correlations with these characteristics and lead to discrimination. For example, using geolocation as a criterion to evaluate a candidate may indirectly discriminate due to residential segregation.
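A simple way to detect such a proxy is to measure how strongly the apparently neutral feature correlates with the sensitive attribute, as in this sketch with invented data:

```python
# Sketch of a proxy-variable check: if a 'neutral' feature correlates
# strongly with a sensitive attribute, it can reintroduce discrimination.
# Requires Python 3.10+ for statistics.correlation. Data is invented.

from statistics import correlation

postcode_score  = [1, 1, 2, 2, 8, 8, 9, 9]  # hypothetical neutral feature
sensitive_group = [0, 0, 0, 0, 1, 1, 1, 1]  # 1 = member of a protected group

print(round(correlation(postcode_score, sensitive_group), 2))  # 0.99: strong proxy
```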
AI algorithms can be biased or discriminatory, often due to biased data, flawed algorithmic designs, or a lack of adequate oversight. These biases can have significant effects on vulnerable or marginalised populations. However, with proper practices, such as regular audits, better data representation, and transparency measures, it is possible to reduce these biases and make AI more equitable and ethical.
Why doesn't PlanningPME use AI?
PlanningPME has chosen not to use artificial intelligence (AI) based on its priorities, current features and business strategy. Here's why PlanningPME doesn't integrate AI:
Nature of user needs
- Simplicity and efficiency: PlanningPME users are often looking for simple and practical solutions to manage their schedules, without unnecessary complexity. AI, while innovative, can be perceived as unnecessarily complicated for tasks where standard tools are sufficient.
- Adapted features: PlanningPME already offers robust features for schedule management (resource allocation, leave management, etc.), and AI is not necessarily essential to meet the current needs of its users.
Personal data compliance (GDPR)
- Data sensitivity: AI integration often involves collecting, analysing, and processing large amounts of data. This may raise concerns about personal data protection and GDPR compliance.
- Avoiding legal risks: By not integrating AI, PlanningPME can avoid the risks associated with poor data management or algorithmic errors that could harm users.
Adaptation to the target audience
- Traditional users: PlanningPME users are often companies or organisations that prefer traditional schedule management, without requiring advanced recommendations or automation. Adding AI features could be perceived as excessive or inappropriate.
No immediate need
- User priorities: Current PlanningPME users have not expressed a demand for AI-based features.
- Perceived added value: In some cases, the integration of AI does not create sufficient added value to justify its development.
Strategic positioning
- Focus on human efficiency: PlanningPME prefers to highlight the importance of human involvement in schedule management, where users remain in full control of decisions, rather than delegating certain tasks to an AI.
- Company vision: Target Skills, the company that publishes the PlanningPME application, has chosen to focus on proven and stable features rather than embarking on emerging technologies such as AI.
Mitigating the risks of AI
- Algorithmic biases: AI systems can introduce biases into automated decisions, which could negatively affect the reliability or fairness of the schedules generated.
- Reliability: AI can sometimes produce results that are inaccurate or ill-suited to specific contexts, which could hurt user satisfaction.
In short, PlanningPME does not use AI because the needs of its current users do not require it and because the company prefers to focus on proven solutions that are tailored to its target audience.
Frequently asked questions
What are the main dangers of AI with regard to the GDPR?
Dangers include excessive data collection, algorithmic biases, difficulty in enforcing the right to erasure, and a lack of transparency in data processing.
Can AI process personal data lawfully?
Yes, but only if it complies with the legal bases of the GDPR (such as explicit consent) and applies security measures such as pseudonymisation.
How can the use of AI be made GDPR-compliant?
By limiting data collection, anonymising or pseudonymising it, and ensuring its security through encryption and regular audits.
What rights do users have over data processed by AI?
Users have the right to access their data, request its deletion, challenge automated decisions, and obtain explanations about the algorithms used.