How to approach AI decisions

We all know that in business we sometimes have to step out of our comfort zones to do what needs to be done; still, changing the way you work is never easy. We want to embrace change and innovation, but we also want to make sure the investment delivers more value than it costs. That tension surfaces every time we explore new technologies and deploy software to improve our companies (and our lives). And AI is no exception.

How do you decide whether AI is worth investing in?

AI is software. Despite the marketing hype and romantic idealization, AI is still just that: software. As with any other software decision, the main question is whether it can help you make or save money. In the AI domain of unstructured data and natural language understanding and processing (NLU/NLP), a wide range of processes and functional tasks can be automated and categorized to solve use cases such as contract analysis, email management, or customer sentiment. AI can certainly bring opportunities, but to shape your solution, start from the challenge or problem you want to solve.

Business Execution vs. AI Model Execution

In a business world where 60-80% of AI projects fail, it’s legitimate to ask whether AI is hyped beyond its actual worth. But it’s also wise to ask why some organizations manage to avoid that failure, and how they drive the remaining 20-40% of AI projects to success.

Software projects succeed when there is a clear focus on the right expectations. Forget AI model execution; instead, set the right expectations and focus on them. It’s not data scientists who can decide how or whether AI is worth it, because they look at model performance: how well the algorithms do on a given data set. It is the line of business that owns the business problem or opportunity and benefits from what a successful implementation of AI can accomplish.

Scaling up doesn’t just mean bigger models

Today we are seeing major investments in machine learning (ML), deep learning (DL), and large language models (LLMs) from some of the world’s largest technology companies. These approaches rely on statistics and pattern recognition and require huge amounts of data to run. Applied to language, this means they make predictions based on the presence, position, and frequency of keywords or patterns in text. But enterprise language data is usually insufficient to train such models (or there is not enough time and resources to train them), and the deeper challenge is language itself, whose nuances do not provide enough consistency and predictability. This is one of the most relevant reasons why symbolic AI is being integrated with ML and DL (a so-called “composite AI” or “hybrid AI” approach): to provide the best of both AI worlds, systems that improve with each application.

With symbolic AI, you can assign meaning to each word based on built-in knowledge and context. In fact, it does not try to predict anything; it tries to mimic how we as humans understand language and read content. The business advantage that sets symbolic AI apart from learning approaches is that you can work with much smaller datasets to develop and refine the AI’s rules. When combined, symbolic AI and learning approaches complement each other to deliver the best results for NLP applications.
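To make the contrast concrete, here is a minimal sketch of the symbolic idea: meaning comes from hand-written knowledge (a lexicon and a context rule) rather than from statistics learned over a large corpus. The lexicon entries, the negation rule, and the function name are all invented for illustration, not taken from any particular product.

```python
# Illustrative symbolic approach: meaning is assigned by explicit,
# human-authored rules and a small built-in lexicon, not learned from data.
SENTIMENT_LEXICON = {
    "excellent": 1, "helpful": 1, "great": 1,
    "slow": -1, "broken": -1, "terrible": -1,
}
NEGATORS = {"not", "never", "no"}

def classify_sentiment(text: str) -> str:
    """Score a sentence with lexicon rules plus a simple negation rule."""
    tokens = text.lower().split()
    score = 0
    for i, token in enumerate(tokens):
        word = token.strip(".,!?")
        polarity = SENTIMENT_LEXICON.get(word, 0)
        # Context rule: a negator directly before a word flips its polarity.
        if polarity and i > 0 and tokens[i - 1].strip(".,!?") in NEGATORS:
            polarity = -polarity
        score += polarity
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Because every rule is visible, refining the system means editing the lexicon or the context rules directly, with no retraining cycle and no large dataset required.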

AI trade-offs and costs

Explainability is another benefit of complementing ML/DL with symbolic AI’s strengths. Learning approaches are black boxes: once a model has been trained, it is impossible to see why it behaves in a certain way. There is no way to correct for bias or remove questionable, controversial results, because we don’t know how the algorithm arrived at a given output. Because you can’t understand the logic behind the model, you are forced into an inefficient cycle of retraining, adding more data, or tagging much more data in the hope of getting the expected results. This is a particularly tough trade-off to weigh when approaching AI decisions in natural language. However, you can target ML/DL at parts of the problem where explainability is not necessary, while symbolic AI, which reaches conclusions and makes decisions through a transparent process (explainable AI), does not require the significant amounts of training data that ML does.
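What a “transparent process” can look like in practice is sketched below: a hypothetical rule engine for the contract-analysis use case mentioned earlier that returns its decision together with the named rules that fired. The rule names, trigger phrases, and function are invented for illustration only.

```python
# Hypothetical sketch of an explainable decision: every conclusion is
# traceable to a named rule, so the outcome can be audited and corrected.
def review_contract_clause(clause: str):
    text = clause.lower()
    rules = [
        ("auto-renewal", "auto-renew" in text, "flag for legal review"),
        ("unlimited liability", "unlimited liability" in text, "flag for legal review"),
    ]
    fired = [(name, action) for name, hit, action in rules if hit]
    decision = "review" if fired else "accept"
    # The trace IS the explanation: no opaque weights, just rules that fired.
    return decision, fired
```

If a result is wrong or biased, you fix the offending rule directly, which is exactly the corrective step a black-box model does not allow.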

AI that is responsible and explainable

According to a recent survey, “As AI failures put companies and their customers at risk and regulatory attention increases, evidence points to the value of developing RAI (responsible AI) policies even before an AI system is implemented.”

There is no single iron-clad definition of responsible AI. But as the technology matures and customers become more aware of issues related to the environment, social responsibility, fairness, and privacy, there is growing acceptance that responsible AI requires: transparency (also expressed as explainability or accountability), sustainability (low carbon), efficiency (practical AI), and a human-in-the-loop approach (AI should be understandable to humans and should not replace humans but humanize the work we do, allowing us to be more efficient, effective, and happier both at work and in life in general). Together, these four aspects form a practical framework for thinking through the full range of AI trade-offs and costs: the value you want to create, as well as the AI approach and the work needed to scale and keep your AI system running.
