Why Your Company Needs to Develop Good Practices in Artificial Intelligence 

By Baufest

Generative Artificial Intelligence (GenAI) holds great potential for enhancing productivity, efficiency, and decision-making processes within organizations.

Thursday, June 13, 2024

It can also help employees focus on more strategic and creative tasks. However, without careful exploration and implementation, future risks could far outweigh the short-term gains. 

It is crucial for companies to address these risks to maintain brand value and avoid legal and financial complications. 

Current use cases of GenAI demonstrate that it can add value to many business areas. However, a recent McKinsey study revealed that organizations are still not diving into these waters with a long-term vision. The survey found that few companies seem fully prepared for widespread use of GenAI or the commercial risks these tools may bring. The results clearly show: 

  • Only 21% of adopters said their organizations had established policies governing employee use of the technology. 
  • Only 32% indicated their companies were mitigating the most cited risk: inaccuracy. 
  • Only 38% said they were mitigating cybersecurity risks, which ranked second in importance. 
  • A mere 25% were working to reduce intellectual property infringement, the third most significant risk. 
  • Only 28% were addressing the fourth most relevant risk: regulatory compliance violations. 

Other risks considered important by organizations in this research included explainability (39%), personal/individual privacy (39%), workforce displacement (34%), fairness and equity (31%), and organizational reputation (29%). 

What to Consider 

It is important to develop and invest in best practices when implementing artificial intelligence (AI) tools and operations. By exploring use cases with a set of guidelines in mind, your organization can “get on the train” of ongoing evolution while mitigating the risks lurking around the corner. The guidelines for developing a responsible and sustainable approach to AI projects are as follows: 

  • Avoid unguided employee behavior as they independently use AI for work. 
  • Prevent negative brand impact due to the use of unverified or inaccurate information in decision-making and public relations. 
  • Avoid legal consequences for unauthorized use of copyrighted material. 

Considering these premises not only reduces various risks but also constitutes an investment in long-term sustainability, avoiding future expenses related to litigation, fines, and reputational damage. 

Paying attention to these aspects of a responsible approach to AI projects will also help your company protect and enhance its corporate image and competitiveness. Conversely, not doing so may result in customer loss, decreased consumer trust, and lost market opportunities compared to competitors who consider these guidelines in their business strategy for AI use and development. 

Employees: Avoiding Unguided Behavior 

It is increasingly tempting for employees to use the capabilities of generative AI for research projects, to obtain answers or ideas, summarize lengthy articles, or create content. In fact, in a survey, more than half of the workers said they use generative AI at least once a week, and 15% use it daily. 

However, risks exist when employees are not trained and given clear guidelines on appropriate use. These risks include using incorrect information, violating data protection laws, or making discriminatory decisions. 

Among other things, employees should be reminded that GenAI tool responses “may be inaccurate and any information generated for commercial purposes must be verified.” 

For example, if GenAI models are trained with biased or inappropriate data, they could generate discriminatory and offensive content or provide faulty results. Therefore, it is important to have clear policies and procedures to address the use of this technology in the workplace “with specific uses that are permitted and limitations or restrictions outlined.” 

As an article from the World Economic Forum indicates, “generative AI is spreading so rapidly that many companies fail to install the necessary barriers.” For instance, in a study conducted in 16 countries, 84% of workers who use GenAI at work said they publicly exposed their company’s data in the past three months. Over 40% said they had seen incorrect results from GenAI, and nearly 10% had used the technology behind their employer’s back. 

Employers should train their staff on the appropriate and responsible use of artificial intelligence and the legal aspects involved. For example, employees should know “how to identify biases, make appropriate inquiries, and avoid publicly sharing company information.” 

Additionally, organizations should conduct periodic audits of these systems to ensure they are used in line with company policies and legal guidelines. These audits can “include data analysis to detect any patterns of discrimination or bias in decision-making as well as monitor employee use to ensure GenAI is used for legitimate commercial purposes.” 
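To illustrate the usage-monitoring side of such an audit, the sketch below scans a hypothetical GenAI usage log for prompts that match restricted patterns. The patterns, log format, and field names are assumptions made for this example, not a complete data-loss-prevention rule set.

```python
import re

# Illustrative policy: patterns that suggest confidential data in prompts.
# These three patterns are assumptions for the sketch, not a real rule set.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bcustomer\s+list\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like numbers
]

def flag_prompts(usage_log):
    """Return the log entries whose prompt matches a restricted pattern.

    `usage_log` is assumed to be a list of dicts like
    {"user": ..., "prompt": ...}.
    """
    flagged = []
    for entry in usage_log:
        if any(p.search(entry["prompt"]) for p in CONFIDENTIAL_PATTERNS):
            flagged.append(entry)
    return flagged

# Hypothetical usage log for the audit
log = [
    {"user": "a@example.com", "prompt": "Summarize this public press release"},
    {"user": "b@example.com", "prompt": "Draft an email with our customer list"},
]
print([e["user"] for e in flag_prompts(log)])  # ['b@example.com']
```

In practice this kind of check would run over centralized logs from an approved GenAI gateway, so that flagged entries feed the periodic audit rather than ad-hoc spot checks.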

It should be noted that in some cases, the employer could also be held responsible for the use (or misuse) of this technology by an employee. 

Avoid Using Inaccurate or Unverified Information 

In the McKinsey survey, data inaccuracy was mentioned as the primary risk. It is known that at the current stage of development, generative AI tools can lead to errors or incorrect data, as evidenced by the “hallucinations” detected in many chatbot responses. 

AI is increasingly used in decision-making processes that affect individuals and society. Therefore, verifying the validity and accuracy of information is becoming increasingly important. Additionally, the rapid advancement of algorithmic systems means that in some cases, it is not clear how AI reached certain conclusions, making company decision-making sometimes dependent on systems that cannot fully explain themselves. 

Poorly designed AI projects based on faulty, inadequate, or biased data can have unintended consequences, particularly inequality and discrimination. This is another key aspect of AI risk and underscores the importance of adopting a responsible approach to AI projects. Companies must consider these factors when implementing AI developments to maintain user trust and protect their business reputation. 

For example, AI can learn and make decisions based on biased data, which can lead to inequality, discrimination, and injustice. If an AI system is trained with historical data that discriminates against a group of people, it could then replicate that discrimination in its decisions. This must be avoided at all costs. 

To illustrate this, AI algorithms are now widely used in the HR departments of many companies to perform initial screening of candidate resumes. When these systems contain biases, they can lead to discrimination against certain groups based on gender or race, resulting in unfair hiring practices and hindering diversity and inclusion efforts. 
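One common way to put a number on this kind of screening bias is the "four-fifths rule" used in US employment analysis: compare pass rates across groups and treat a ratio below 0.8 as a signal of potential adverse impact. The sketch below applies it to hypothetical screening counts; the group labels and figures are invented for illustration.

```python
def adverse_impact_ratio(outcomes):
    """Four-fifths-rule check over automated screening outcomes.

    `outcomes` maps a group label to (passed, total) counts from a
    resume screen. Returns the ratio of the lowest pass rate to the
    highest; values below 0.8 conventionally warrant closer review.
    """
    rates = {g: passed / total for g, (passed, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical results from an AI resume filter
results = {"group_x": (40, 100), "group_y": (20, 100)}
ratio = adverse_impact_ratio(results)
print(round(ratio, 2))  # 0.5 -> below 0.8, flag for review
```

A metric like this is only a first-pass signal: a low ratio does not prove discrimination, and a passing ratio does not rule it out, but tracking it over time gives audits something concrete to act on.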

Therefore, companies must ensure that all information used to train AI systems is correct, fair, and representative of all people. 

Avoid Legal Consequences for Using Protected Material 

The legal implications of using GenAI are still unclear. The ambiguity surrounding the ownership and protection of AI-generated works creates challenges for companies using these tools. This includes concerns about infringing third-party copyrights through unauthorized reproduction of protected works or unauthorized creation of derivative works. There are also risks of infringing the rights attached to databases used for training, as well as open questions about the patentability of AI-generated inventions. Furthermore, organizations need to protect confidential information and manage compliance risks. 

In this regard, it is important for companies to first understand the risks and then stay informed and learn how to protect themselves. For example, if they use AI tools, they should seek protections in contracts and require service terms that confirm the proper licensing of training data that feeds the platform. 

Data Protection 

Another central concern related to AI use involves personal data protection. These systems need large amounts of data to train and provide insights, which can risk both privacy and data security. The risk is that personal information may be used inappropriately, affecting individuals’ privacy. Therefore, usage policies should involve data management and privacy guidelines in AI projects. 
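A minimal technical safeguard along these lines is to redact obvious personal data before a prompt leaves the organization. The sketch below assumes simple regex patterns for emails and US-style phone numbers; a real deployment would rely on a vetted PII-detection tool rather than two hand-written patterns.

```python
import re

# Minimal sketch: strip common PII from a prompt before sending it to an
# external AI service. The two patterns below are illustrative assumptions
# and cover only emails and US-style phone numbers.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-\s]\d{3}[-\s]\d{4}\b"), "[PHONE]"),
]

def redact(text):
    """Replace matched PII spans with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(redact(prompt))  # Contact Jane at [EMAIL] or [PHONE].
```

Routing all GenAI traffic through a redaction step like this, alongside access controls and retention limits, is one concrete way a usage policy can translate into day-to-day data management practice.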

Another concern when thinking about artificial intelligence is its impact on employment and the possibility that this technology will make many jobs obsolete. At this point, companies need to act responsibly and take charge of training and upskilling employees for new roles and the skills demanded by the current job market. 

Regarding AI, it is imperative to balance innovation with responsibility. At Baufest, we can help your company explore the most interesting AI use cases for your business with a responsible and sustainable approach centered on best practices. We can be the partner you need to address AI risks so you can harness its significant advantages and avoid future problems.