How to Vet AI Tools

April 15, 2023 | AI, Blog

7 Questions to ask when vetting AI tools and products.

1. What problem does the AI app solve?
For any tool or application to be useful, it must provide a solution to a real and relevant problem.
To understand what problems the AI application solves, ask the following questions:
Who faces the problem in the organization?
Are resources already being used to solve the problem without AI? If so, at what cost?
What value comes from solving the problem, e.g., monetary benefits or convenience?
Can the problem be solved efficiently without AI? If so, how much time and effort would that take?
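To make the cost-and-value questions concrete, here is a minimal back-of-the-envelope sketch. Every figure in it (hours, rates, subscription price) is a hypothetical assumption, not data from any vendor:

```python
# Illustrative comparison of manual vs. AI-assisted cost.
# All figures below are hypothetical assumptions, not vendor data.

def annual_cost_manual(hours_per_week: float, hourly_rate: float) -> float:
    """Yearly cost of solving the problem with staff time alone."""
    return hours_per_week * hourly_rate * 52

def annual_cost_ai(subscription_per_month: float,
                   remaining_hours_per_week: float,
                   hourly_rate: float) -> float:
    """Yearly cost of the AI tool plus the staff time it still requires."""
    return subscription_per_month * 12 + remaining_hours_per_week * hourly_rate * 52

manual = annual_cost_manual(hours_per_week=10, hourly_rate=40)
with_ai = annual_cost_ai(subscription_per_month=99,
                         remaining_hours_per_week=2, hourly_rate=40)
print(f"Manual: ${manual:,.0f}/yr, with AI: ${with_ai:,.0f}/yr, "
      f"savings: ${manual - with_ai:,.0f}/yr")
```

If the savings line is small or negative under your own numbers, the tool may not be solving a problem worth paying for.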

2. Is essential data accessible?
AI-based applications will always need some level of data. This raises the question of where the data will come from, and whether the data is ready for use or needs to be cleaned up. Ask yourself what data you will allow the AI to access, and be mindful of intellectual property and private data.
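Before handing any data to an AI tool, a quick readiness pass can surface missing values and obviously sensitive fields. A minimal sketch, where the column names and PII keyword list are illustrative assumptions:

```python
# Minimal data-readiness check: count missing values per column and flag
# columns whose names suggest personally identifiable information (PII).
# The keyword list is an illustrative assumption; extend it for your data.

PII_KEYWORDS = {"name", "email", "phone", "ssn", "address", "dob"}

def readiness_report(rows: list[dict]) -> dict:
    """Return missing-value counts per column and columns flagged as likely PII."""
    columns = list(rows[0].keys()) if rows else []
    missing = {c: sum(1 for r in rows if r.get(c) in (None, "")) for c in columns}
    pii = [c for c in columns if any(k in c.lower() for k in PII_KEYWORDS)]
    return {"missing": missing, "likely_pii": pii}

sample = [
    {"customer_email": "a@example.com", "purchase_total": 42.0},
    {"customer_email": "", "purchase_total": None},
]
report = readiness_report(sample)
print(report)
```

A keyword match is only a first pass; it will not catch PII hiding in free-text fields, so a human review is still warranted before sharing anything.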

3. How much is the AI tool or application?
The app might be offered as a one-time purchase or a subscription.
Find out how much expertise and how many resources you will need to run, maintain, and improve the AI-based solution.

4. How does the product deploy AI?
Many companies have added AI as a buzzword to their products when, in reality, the product may simply deploy automated analytics processes and workflows.
Is the AI tool browser-based, or does it need to be downloaded and run on your computer? Be cautious if it must be installed and run locally.
When this is not clear, follow up with the developer or company and ask them to provide this information.

5. What features does the product have?
The features of an AI-based app or website are key in delivering value to your organization. You need to test the features of the AI product and see if they work; the adage ‘try before you buy’ rings true here.
You can do this if the app or website is online. Sometimes, the AI company might need to set you up with a demo account so you can access the features.
Integrations and APIs are key aspects to look out for. This is because no tool works alone. If the AI-based software is a chatbot, then it should have the infrastructure in place to integrate with common business tools such as CRMs, webpages, and even social media.
Understand the installation and deployment process, and how it plays out in real-world applications. Then you’ll have a good idea of how well the AI tool performs.

6. What are the results achieved and what is the accuracy of the AI tool?
Find out about the successful and failed deployments of the app by asking the company. If the app has been in the market for a while, you’ll also expect to see client testimonials, ratings, and case studies.
The absence of testimonials, ratings, and case studies doesn’t necessarily mean the app doesn’t work. However, cover your bases by asking the company if they have any case studies.

No AI is 100% accurate. That’s why it’s important to have an acute understanding of how accurately and robustly the AI tool predicts outcomes or provides recommendations. Ask the company for this information; they should have tested the AI against historical data and competing solutions while developing the application.
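One concrete way to sanity-check a vendor’s accuracy claim is to score the tool’s predictions against historical records whose outcomes you already know. A minimal sketch, where `tool_predict` is a hypothetical stand-in for whatever interface the vendor actually exposes:

```python
# Score a tool's predictions against historical outcomes you already know.
# `tool_predict` is a hypothetical stand-in for the vendor's real API.

def tool_predict(record: dict) -> str:
    # Placeholder rule so the sketch runs; replace with a real call to the tool.
    return "churn" if record["support_tickets"] > 3 else "stay"

def accuracy(records: list[dict], labels: list[str]) -> float:
    """Fraction of records where the tool's prediction matches the known outcome."""
    correct = sum(1 for r, y in zip(records, labels) if tool_predict(r) == y)
    return correct / len(labels)

history = [{"support_tickets": 5}, {"support_tickets": 1}, {"support_tickets": 4}]
known_outcomes = ["churn", "stay", "stay"]
print(f"accuracy on historical data: {accuracy(history, known_outcomes):.0%}")
```

If the number you measure on your own history is far below the vendor’s claim, ask them why before buying.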

7. How does the application work?
Now that you know what problem the website or mobile app solves, you can proceed to test the application.
Data is at the heart of any AI solution:
All applications that deploy some form of machine learning or artificial intelligence use data. In some cases, data is the intellectual property that can give the application a competitive edge.
Find out what data was used to train the AI models. If the AI was trained on insufficient data, the output might look acceptable internally, but the solution won’t achieve the desired outcomes when deployed in the wild.
Also, ask the AI company about the sources and volume of data used to train AI models. Other information you need to find out about data includes:

Management: What resources are needed to collect and manage the data?
Scalability: What will it take to scale the application from a data perspective?
If you can, go a step further and test the application with your own data.
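Testing with your own data is most informative when you repeat the same evaluation across several data sets or scenarios and compare the results. A hedged sketch of that loop, where `evaluate_tool` and the scenario data are hypothetical placeholders for whatever check you run against the vendor’s product:

```python
# Run the same evaluation across several of your own data sets ("scenarios")
# and compare results. `evaluate_tool` and the scenario data are hypothetical
# stand-ins for whatever check you perform against the vendor's product.

def evaluate_tool(dataset: list[int]) -> float:
    # Placeholder metric so the sketch runs: fraction of positive values.
    return sum(1 for x in dataset if x > 0) / len(dataset)

scenarios = {
    "typical_week": [3, 1, 4, 1, 5],
    "holiday_spike": [9, 2, -6, 5, 3],
    "sparse_data": [-1, 0, 1, 0, -2],
}

results = {name: evaluate_tool(data) for name, data in scenarios.items()}
for name, score in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>14}: {score:.2f}")
```

A large spread between scenarios is a warning sign: the tool may only perform well on data that resembles its training set.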

The questions above are a great starting place when evaluating AI tools, applications, and products.

Make sure your company has an AI policy, and educate your employees on the benefits and risks of using AI tools. This is important for protecting the organization from a potential breach.

Who Owns the Copyright to Generative AI Outputs?

This is a common question I get. The following comes directly from a Congressional Research Service report; read it here: https://crsreports.congress.gov/product/pdf/LSB/LSB10922#:~:text=AI%20programs%20might%20also%20infringe,created%20%E2%80%9Csubstantially%20similar%E2%80%9D%20outputs.

Assuming some AI-created works may be eligible for copyright protection, who owns that copyright? In general, the Copyright Act vests ownership “initially in the author or authors of the work.” Returning to the photography analogy, the AI’s creator might be compared to the camera maker, while the AI user who prompts the creation of a specific work might be compared to the photographer who uses that camera to capture a specific image. In this view, the AI’s user would be considered the author and, therefore, the initial copyright owner. The creative choices involved in coding and training the AI, on the other hand, might give an AI’s creator a stronger claim to some form of authorship than the manufacturer of a camera.

*Do your own research. I am not an attorney, nor do I play one on TV.

Ethics and Responsibility of AI Implementation

Sample guidance that the U.S. government uses regarding the ethics and responsible implementation of AI:
https://coe.gsa.gov/coe/ai-guide-for-government/responsible-ai-implementation/index.html

Vetting AI products can be challenging but highly rewarding

(Below is how ChatGPT answered this question.)
If you take the time to find out as much as possible about an AI app or tool, you can make a wise investment decision.

The most robust test you can perform to verify the viability of an AI solution is to test the product multiple times. Test with multiple data sets and different scenarios to determine how well the application deploys machine learning. How you feel about the product after testing it (and asking yourself the questions below) will give you insight into whether the AI product is right for you.

When vetting artificial intelligence (AI) tools, there are several questions that you can ask to ensure their legitimacy and address potential security or privacy concerns. Here are some questions to consider:

What is the purpose of the AI tool, and how does it work?
Understanding the tool’s purpose and mechanics is important for assessing its usefulness and determining whether it aligns with your needs.

Who developed the AI tool, and what are their credentials?
Researching the developers and their track record in the industry can give you an idea of their expertise and reputation.

What data does the AI tool require, and how is it processed?
Understanding the data inputs and processing methods can help you identify potential privacy and security risks.

Does the AI tool comply with relevant data privacy regulations, such as GDPR or CCPA?
Ensuring compliance with relevant regulations can help mitigate privacy and security risks.

How is the AI tool tested and validated, and what are the results?
Reviewing the testing and validation results can help you assess the tool’s effectiveness and reliability.

Are there any security or privacy risks associated with the AI tool, and how are they addressed?
Identifying and addressing potential risks is essential for mitigating security and privacy concerns.

What kind of support and training is provided for the AI tool, and what are the costs?
Knowing the level of support and training available, as well as the costs, can help you determine the tool’s value and assess the feasibility of implementation.

By asking these questions, you can better vet AI tools and make informed decisions about their use.