
AI Architecture: The Foundation of Effective Applied AI Solutions

In recent years, the development of modern AI applications has focused heavily on leveraging generative AI. Numerous large language models are available and can easily be put to use for a variety of purposes. However, these models have a significant limitation: they have no access to confidential data or to information published after their training.

Designing and implementing AI architecture is crucial for overcoming these limitations. In this post, we will examine how software engineering complements AI and why applying AI requires strong software expertise. We will also discuss the use of AI software frameworks, their advantages and challenges, and the importance of adopting an AI-first mindset for companies. Through these themes, we will explore how businesses can develop their AI capabilities and ensure the quality and sustainability of their solutions.

Software Engineering Complements AI

You may have heard that software often compensates for the shortcomings of hardware. The same logic applies to the data issues of generative AI: software solutions can enhance AI models and make them more effective.

One solution that has emerged is the RAG architecture (Retrieval-Augmented Generation), in essence search-assisted generation. This post will not go deeply into the technical details of this AI architecture; it assumes the reader is familiar with the basics, which can be reviewed in my previous post about RAG.

Good, simple examples of RAG implementations include customer service chats that use databases of previously solved problems as a source of information. In practice, the best-matching solved cases are retrieved from the database based on the customer's query, and the language model uses these sources to form a response. RAG reduces the model's hallucinations and provides accurate, data-driven answers.
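As a rough illustration of that flow, the toy sketch below retrieves the best-matching cases with naive word overlap and builds a grounded prompt. The solved cases, the scoring, and the generate placeholder are invented for this example, not taken from any real system.

```python
# Toy RAG sketch: retrieve the best-matching solved cases, build a grounded
# prompt, and hand it to a (placeholder) language model.

SOLVED_CASES = [
    "Password reset emails land in spam; whitelist noreply@example.com.",
    "Invoices missing VAT: enable 'Show tax details' under billing settings.",
    "Mobile app crashes on login after OS update; reinstalling clears the cache.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank solved cases by naive word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        SOLVED_CASES,
        key=lambda case: len(query_words & set(case.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(query: str, sources: list[str]) -> str:
    """Combine the retrieved sources and the question into a grounded prompt."""
    context = "\n".join(f"- {source}" for source in sources)
    return (
        "Answer using only the solved cases below. "
        "If they do not cover the question, say you don't know.\n"
        f"Solved cases:\n{context}\n\nQuestion: {query}"
    )

def generate(prompt: str) -> str:
    """Placeholder for a real language model call."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    question = "Why does the password reset email never arrive?"
    print(generate(build_prompt(question, retrieve(question))))
```

In a real implementation the overlap scoring would be replaced by embedding or hybrid search, and generate by a call to the chosen language model.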

Most applied AI solutions that use generative AI models and external data likely have some form of RAG implementation at their core. For this reason, integrating RAG as a built-in part of a product, or developing a more comprehensive AI engine around it, is now on the to-do list of nearly every company.

Applied AI Solutions Require New Expertise

The software and cloud architecture of an AI solution must be designed rationally, considering AI requirements. A comprehensive understanding of software engineering, cloud, and AI is critical for implementing a high-quality, functional solution.

Applied AI is a combination of software engineering, AI/ML and cloud skills.

This represents a new area of software engineering, applied AI engineering (often referred to simply as AI engineering), which you can learn more about from our applied AI services and the description of an applied AI engineer. Despite the name, a practitioner's expertise leans heavily on software skills, combined with a strong understanding of AI, algorithms, and related application-layer methods.

Most companies lack the capacity to train or hire software engineers specialized in applied AI. Therefore, many companies' software development teams have adopted ready-made AI software frameworks to build new AI features into their software with little effort. This seems like a quick and cheap path to success, doesn't it?

Do You Own Your AI Architecture?

Over the past year, there has been a lot of critical discussion about AI software frameworks (also known as data frameworks or LLM frameworks) such as Langchain and Llamaindex. A typical problem is that they contaminate the entire software architecture by forcing it to use the framework's data structures and methods. In addition, they hide the core logic under several abstraction layers, which eventually need to be modified when working with proprietary data.

Using frameworks can also be beneficial, and they often speed up implementation, especially in rapid experiments and proofs of concept (PoCs). They are valuable when evaluating different RAG architectures and methods against proprietary data. The best-suited method for leveraging proprietary data can be identified using frameworks, and from there, a better custom solution can be developed.

In production implementations, however, frameworks likely create more problems than they solve. Applied AI solutions still require significant customization for each use case. The most value-adding core logic should not be outsourced to software frameworks, because the core of the company's services or product must remain under the internal development team's control. Just as you own your product's software architecture, integration architecture, and data architecture, you must also own its AI architecture.

Despite this, many teams have adopted ready-made open-source RAG implementations to simplify the work, only to gradually realize that applied AI may require more than just a new software framework.

Software Frameworks Cannot Fill Gaps in Applied AI Expertise

The retrieval phase of RAG always works very close to the company's or product's data. The retrieval, any indexing, and modification of this critical business data should be entirely under the company's control. Well-functioning retrieval lays the foundation for a functional RAG: high-quality retrieval results produce high-quality answers. Forcing retrieval functionality into the ready-made structures and methods offered by AI frameworks makes implementing high-quality retrieval difficult, if not impossible.
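One way to keep that control is to hide the vector database and embedding model behind a small retrieval interface owned by the team, so they can be tuned or swapped without touching the callers. The sketch below is illustrative only; the names and the toy scoring are made up for this post, not taken from any framework.

```python
# Illustrative retrieval interface owned by the application, so the index,
# embeddings, and filtering logic stay under the team's control.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class RetrievedChunk:
    text: str
    source_id: str
    score: float

class Retriever(Protocol):
    def search(self, query: str, top_k: int) -> list[RetrievedChunk]:
        ...

class InMemoryRetriever:
    """Toy implementation scoring chunks by word overlap with the query.

    A production version would own its embeddings, index, and metadata
    filters, and could be replaced behind the same interface.
    """

    def __init__(self, chunks: dict[str, str]):
        self._chunks = chunks  # source_id -> text

    def search(self, query: str, top_k: int) -> list[RetrievedChunk]:
        words = query.lower().split()
        hits = [
            RetrievedChunk(
                text=text,
                source_id=source_id,
                score=sum(word in text.lower() for word in words),
            )
            for source_id, text in self._chunks.items()
        ]
        hits.sort(key=lambda hit: hit.score, reverse=True)
        return hits[:top_k]
```

The rest of the application then depends only on the Retriever interface, which keeps the business-critical retrieval logic, and its evolution, in-house.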

Frameworks are widely used, and I dare say that many low-quality GenAI implementations result from their use. Frameworks are not inherently unusable or bad; the creators of Llamaindex in particular are doing groundbreaking work on new methods and capabilities. The root causes of the problems lie elsewhere.

The mechanics under the hood of frameworks, such as the retrieval methods, are generic, “one-size-fits-all” solutions that rarely fit every company's use case directly. Frameworks also conceal components like vector databases and the formation of embeddings (vectors calculated from text). Evaluating the quality of retrieval and responses is often partly or entirely overlooked when frameworks are used. Quality may be sacrificed for low cost and fast delivery, but ignorance of applied AI and a belief in effortless implementation also contribute to poor outcomes.
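Even a small, hand-labelled evaluation set goes a long way toward catching those quality gaps. A minimal recall@k check might look like the sketch below, where the labelled examples and the retrieve signature are assumptions for illustration.

```python
# Minimal retrieval evaluation sketch: recall@k over hand-labelled queries.
# Each labelled item pairs a query with the source ids a good retriever
# should return for it; the examples are invented for illustration.
from typing import Callable

def recall_at_k(
    retrieve: Callable[[str, int], list[str]],  # returns ranked source ids
    labelled: list[tuple[str, set[str]]],
    k: int = 3,
) -> float:
    hits = 0
    for query, relevant_ids in labelled:
        returned = set(retrieve(query, k))
        hits += bool(returned & relevant_ids)
    return hits / len(labelled)

# Usage with any retriever that returns source ids:
# labelled = [("password reset email missing", {"case-17"}),
#             ("invoice shows no VAT", {"case-42"})]
# print(recall_at_k(my_retrieve, labelled, k=3))
```

Tracking a number like this across changes to chunking, embeddings, or prompts turns retrieval quality from guesswork into an engineering task.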

Ready-Made Frameworks or Customized Solutions?

When designing software and AI architecture, you must decide whether to use a ready-made software framework or to build everything from scratch. A middle ground is to use only one or two smaller specialized libraries and develop the rest in-house.

While developing our private GenAI product YOKOT.AI, we noticed Langchain's problems at a very early stage: we could not apply language models to our use cases due to the framework’s limitations. We removed it from the product before the first production release in the summer of 2023 and replaced part of it with smaller specialized libraries. This allowed faster product iteration and the implementation of new custom AI capabilities.

LinkedIn, on the other hand, found that when implementing an AI feature, 80% readiness was quickly achieved in a month or two, but the remaining 20% took almost six months even for experienced AI developers. Taking applied AI into production use is thus significantly more demanding than adopting a single framework and calling a few ready-made functions. This, of course, does not only apply to AI solutions but to high-quality software development in general.

The implementation approach for AI-enabled applications is often described with two terms. Lightweight, quickly built AI solutions that often rely on ready-made frameworks are called thin wrappers, while customized, more advanced AI architectures are called thick wrappers. The wrapper refers to the application wrapped around one or more generative AI models. Thin wrappers do not add much value to their users, as they introduce little new functionality on top of the models beyond some auxiliary functions or a user interface. Thick wrappers, on the other hand, focus on adding value by creating new AI capabilities through innovative combinations of software and AI models, as the sketch below illustrates.
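This is a deliberately simplified contrast; the llm and retriever arguments are placeholders for a real model client and retrieval layer, not a prescribed design.

```python
# Placeholder sketch contrasting thin and thick wrappers around an LLM.
from typing import Callable

def thin_wrapper(llm: Callable[[str], str], user_prompt: str) -> str:
    # Thin wrapper: the application is essentially a pass-through UI.
    return llm(user_prompt)

def thick_wrapper(
    llm: Callable[[str], str],
    retriever: Callable[[str], list[str]],
    user_prompt: str,
) -> str:
    # Thick wrapper: value is added around the model with the team's own
    # retrieval, domain rules, and validation.
    sources = retriever(user_prompt)
    if not sources:  # simple domain guardrail
        return "No supporting data found; escalating to a human."
    grounded = "Use only these sources:\n" + "\n".join(sources)
    return llm(f"{grounded}\n\nTask: {user_prompt}")
```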

AI-First Mindset

AI has created a new area of software design, and mastering its nuances requires learning. A senior-level backend developer can get a preliminary grasp of various GenAI methods and architectures after six months of intensive development cycles. By this time, however, a slew of new methods and libraries have emerged, and the pace only accelerates. While frontend development often feels intensive with its continuous changes, working with applied AI involves learning at a much faster pace.

It is high time to adopt an AI-first mindset: AI should be treated as a core functionality of the software solution rather than bolted on top of the existing one with a ready-made framework. The change is not simple because, at worst, a comprehensive AI architecture requires changes almost everywhere: in all layers of the software and in the mindset of its designers. A well-implemented existing software architecture helps here and facilitates AI integration. When designing new solutions, the architecture should take into account both the current and future requirements of AI.

Embrace Applied AI Today

AI architecture cannot be created without strong expertise in software, cloud, data, and integration architectures. While using ready-made AI software frameworks may be a quick and attractive solution, their limitations often become apparent in production environments. The structure, quality, and availability of the company's data significantly affect the implementation, not to mention the rest of the infrastructure. The closer AI is brought to the application layer of the software, the more critical deep software engineering knowledge becomes for the success of the AI project.

Softlandia's applied AI expertise enables rapid AI integration to end-user applications.

Adopting an AI-first mindset and mastering AI architecture are critical factors in successful applied AI solutions. Companies must develop their capabilities and their understanding of how AI and software solutions integrate to ensure that their solutions are both high-quality and sustainable. This way, the full potential of AI can be harnessed and competitiveness maintained in an environment where AI usage is rapidly becoming essential for businesses.

The saying “all companies are software companies” can be transformed in the age of AI: all companies are both software and AI companies.
