LLM-first software architecture: four fundamental design principles from ArchGuard Co-mate


While optimizing Co-mate, ArchGuard's AI-assisted architecture governance tool, I noticed that some of its patterns were quite similar to earlier designs such as AutoDev and ClickPrompt. So I started thinking about which design principles suit ArchGuard Co-mate, and drafted a preliminary set of three.

As I was preparing to share about LLM + architecture inside the company, I decided to tidy them up into four architecture design principles that are more suitable for general use. They will serve as a reference for my future designs of software architecture with an LLM at its core.


TL;DR version:

User-intent-driven design. Design a brand-new human-machine interaction experience and build domain-specific AI roles to better understand the user's intent. In short, look for more suitable ways to understand human intent in the interaction.

Context awareness. Build an application architecture that is well suited to gathering business context, so as to generate more accurate prompts, and explore engineering approaches that keep response times low. In other words, build a prompt-engineering system centered on high-quality context.

Atomic capability mapping. Analyze the atomic capabilities the LLM is good at and map them onto the capabilities the application lacks. Let each AI do what it is good at, for example by making use of the AI's reasoning ability.

Language API. Explore and search for an appropriate new generation of APIs that makes it easier for the LLM to understand, schedule, and orchestrate service capabilities. Examples include natural language as the "human-machine API" and a DSL as the API between the AI and the machine.

These are reference architecture principles and need to be tailored for different scenarios. The above are just initial ideas; further research and practice are needed to refine them.

Introduction: Three Design Principles of ArchGuard Co-mate

ArchGuard Co-mate adheres to three key design principles:

1. Functionality: every element of the design must serve a purpose and contribute to the overall functionality of the product.
2. Simplicity: the design must be intuitive and easy to use, with minimal complexity and unnecessary features.
3. Aesthetics: the product must be visually appealing and well designed, creating a positive user experience and a strong brand image.

Co-mate is built on the analytical capabilities of ArchGuard and is constructed around DSLs and specification documents at its core. We therefore designed three preliminary design principles:

DSL as a unified language. By using domain-specific languages (DSLs) to enhance human-machine interaction, efficient communication can be achieved both between humans and machines and between machines themselves.

Atomic LLM for orchestration. Use the atomic capabilities of the LLM to construct complex behaviors on top of the DSL. This refers to the dynamic function generation based on LLM atomic capabilities mentioned in our previous article, "Specification as Governance of Functions".

A well-designed layered dynamic context. By dividing the context into different layers, the LLM can handle complexity more effectively.

The overall relationship is shown in the following diagram:

[Diagram: the overall relationship between the three design principles]

In Co-mate, we use a Kotlin type-safe builder to encapsulate the basic function capabilities, so that the LLM can orchestrate and manage these functions based on the documentation and specifications.

The original specification is as follows:

- None of the names in the code may begin or end with an underscore or a dollar sign.
- Names in the code must not mix pinyin with English and must not use Chinese directly. Correct English spelling and grammar make the code easy for readers to understand and avoid ambiguity.
- Class names use the UpperCamelCase style and must follow camel-case form. Positive example: HelloWorld.

Example DSL is shown below:

naming {
    class_level {
        style("CamelCase")
        pattern(".*") { name shouldNotBe contains("$") }
    }
    function_level {
        style("CamelCase")
        pattern(".*") { name shouldNotBe contains("$") }
    }
}
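
The DSL above can be backed by a Kotlin type-safe builder, as mentioned earlier. The following is only a rough sketch of what such a builder could look like; the class and function names are assumptions for illustration, not the actual Co-mate implementation.

// Illustrative sketch only: a hand-written type-safe builder that could back the DSL above.
fun contains(text: String): (String) -> Boolean = { it.contains(text) }

class PatternRule(val regex: Regex) {
    val checks = mutableListOf<(String) -> Boolean>()
    val name = this                                // lets the DSL read `name shouldNotBe ...`
    infix fun shouldNotBe(predicate: (String) -> Boolean) {
        checks.add { input -> !predicate(input) }  // the rule passes when the predicate does NOT match
    }
}

class LevelRule(val level: String) {
    var styleName: String = ""
    val patterns = mutableListOf<PatternRule>()
    fun style(value: String) { styleName = value }
    fun pattern(regex: String, block: PatternRule.() -> Unit) {
        patterns.add(PatternRule(Regex(regex)).apply(block))
    }
}

class NamingSpec {
    val levels = mutableListOf<LevelRule>()
    fun class_level(block: LevelRule.() -> Unit) { levels.add(LevelRule("class").apply(block)) }
    fun function_level(block: LevelRule.() -> Unit) { levels.add(LevelRule("function").apply(block)) }
}

fun naming(block: NamingSpec.() -> Unit): NamingSpec = NamingSpec().apply(block)

The point of the sketch is that the LLM only has to emit the small, constrained DSL, while the application keeps full control over how each rule is interpreted and executed.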

The conversion of the specification document above into the DSL is handled and generated dynamically by the LLM (currently a work in progress). With this in place, the architecture does not differ much from the LLM-based applications we previously open-sourced, apart from which capabilities are used, and because interaction is not yet at our core. That is why I added a new principle: user-intent-driven design.

Design principles for LLM-first software architecture

For developers and architects, LLMs bring both opportunities and challenges: how an LLM can assist architecture design, how to build an architecture based on an LLM, how to guide architecture design with an LLM, and how to build a software architecture centered around the LLM.


Different modes will have a significant impact on existing processes and software. Based on a series of explorations within Thoughtworks and on summarizing LLM-based software architectures, I have rethought and distilled four principles:

1. User-intent-driven design.
2. Context awareness.
3. Atomic capability mapping.
4. Language API.

Please see the details below.

1. User-Intent-Driven Design

As we all know, existing applications use chat as one of the entry points to the LLM, and the purpose of chat is to understand the user's intent, such as "help me write an article introducing design principles." Here the intent is very direct, and to let users express their intent better, their input often has to be deliberately guided.

Different ways of guiding or packaging that input then emerge, such as packaging menus as commands, packaging commands as prompts, generating UI from parsed user input, and so on.
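
As a minimal, hypothetical sketch of "packaging commands as prompts" (the command names and templates below are invented for illustration):

// Hypothetical sketch: packaging user-facing commands as prompt templates.
data class Command(val name: String, val promptTemplate: String)

val commands = listOf(
    Command("/explain-arch", "Explain the architecture of the following module to a new team member:\n%s"),
    Command("/write-adr", "Write an architecture decision record for the following decision:\n%s")
)

fun toPrompt(userInput: String): String {
    val command = commands.firstOrNull { userInput.startsWith(it.name) }
        ?: return userInput                        // no command matched: pass the raw intent through
    val argument = userInput.removePrefix(command.name).trim()
    return command.promptTemplate.format(argument)
}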

To better understand user intent, we need to consider designing a completely new human-computer interaction experience.


Summary: by designing a brand-new human-machine interaction experience and building domain-specific AI roles, we can better understand the user's intent. For example, in a chat application, the AI can use natural language processing to understand the user's intent and answer their questions better. In addition, we can explore other interaction methods, such as voice recognition and gesture recognition, to improve the user experience.

2. Context Awareness

In previous articles, we have repeatedly emphasized the importance of context engineering. Our original definition was: context engineering is a method that helps the LLM better solve a specific problem. The core idea is to provide the LLM with background information about the problem, such as instructions and examples, to prompt it to generate the answers or content we need.

When business scenarios are involved, what we need to consider is a software architecture built around context. In ArchGuard Co-mate, for example, our approach is to build a dynamic context in a layered way, mainly because our understanding of a user's intent exists at different architecture levels, such as the business architecture, the technical architecture, and the code.
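
A minimal sketch of what such a layered dynamic context might look like in code; the layer names and assembly logic are assumptions for illustration, not the Co-mate implementation:

// Hypothetical sketch of a layered dynamic context: each architecture layer
// contributes background text, and only the layers relevant to the current
// intent are assembled into the prompt.
enum class ArchitectureLayer { BUSINESS, TECHNICAL, CODE }

fun interface ContextProvider {
    fun provide(intent: String): String
}

class LayeredContext(private val providers: Map<ArchitectureLayer, ContextProvider>) {
    fun buildPrompt(intent: String, layers: List<ArchitectureLayer>): String {
        val background = layers
            .mapNotNull { providers[it]?.provide(intent) }
            .filter { it.isNotBlank() }
            .joinToString("\n\n")
        return "Background:\n$background\n\nTask:\n$intent"
    }
}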


Summary: by building an application architecture that is well suited to gathering business context, we can generate more accurate prompts and explore engineering methods for fast responses. This is prompt engineering centered on high-quality context. For example, in an e-commerce application, the AI can use a user's shopping and browsing history to provide better shopping recommendations.

3. Atomic Capability Mapping

Initially, most applications integrating with OpenAI had the LLM directly generate JSON or YAML. However, after trying to generate about 3,000 PlantUML diagrams, we found that roughly 20% of the generated diagrams were incorrect and would not compile. It was this scenario that made us question whether the LLM is suitable for this kind of task.

In architecture governance, we define it as: using the atomic capabilities of the LLM to formalize architecture knowledge, to map and construct governance functions, and to measure different scenarios dynamically.

In everyday business contexts, analyzing the LLM's capabilities is just as critical. We should avoid having the LLM perform mathematical calculations and instead connect the intent to system functionality through mechanisms such as function calling.

So, we broke down the capabilities of LLM and integrated them with the system in different ways.
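
A rough sketch of this capability split (the names and the example task are illustrative only): each sub-task is routed either to the LLM or to an ordinary system function, so the model is never asked to do the arithmetic itself.

// Hypothetical sketch: map each sub-task either to an LLM capability
// (reasoning, summarizing, explaining) or to a plain system function
// (calculation, lookups), instead of letting the LLM do everything.
sealed class Capability {
    data class LlmTask(val prompt: String) : Capability()
    data class SystemFunction(val run: () -> String) : Capability()
}

fun dispatch(capability: Capability, callLlm: (String) -> String): String =
    when (capability) {
        is Capability.LlmTask -> callLlm(capability.prompt)
        is Capability.SystemFunction -> capability.run()
    }

// Example: the total is computed locally; only the explanation goes to the LLM.
fun explainOrderTotal(prices: List<Double>, callLlm: (String) -> String): String {
    val total = dispatch(Capability.SystemFunction { prices.sum().toString() }, callLlm)
    return dispatch(Capability.LlmTask("Explain to the customer that the order total is $total."), callLlm)
}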


Summary: we need to analyze the atomic capabilities that the LLM excels at, combine them with the capabilities the application lacks, and map between the two. Let each AI do what it is good at, for example by making use of the AI's reasoning ability. For instance, in a smart-home application, the AI can automatically adjust indoor temperature, lighting, and so on based on the user's behavior to provide a better home experience.

4. Language API

After discussing with several architects, we reached a nearly unanimous conclusion: we need a new kind of API that is suitable for LLMs.

It may be a language-based API. Between the AI and the machine, and between machines themselves, it is something we are already familiar with, such as JSON, YAML, or another custom DSL. Between humans and machines, this language API is natural language or a graphical representation.
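
As a hedged illustration of "DSL as the API between AI and machine" (the line-based format below is invented for this example, not a real Co-mate format):

// Hypothetical sketch: the LLM emits a tiny line-based DSL as its "API call",
// and the application parses it into typed commands.
data class GovernanceCommand(val action: String, val target: String)

fun parseLanguageApi(llmOutput: String): List<GovernanceCommand> =
    llmOutput.lines()
        .map { it.trim() }
        .filter { it.isNotEmpty() && !it.startsWith("#") }
        .mapNotNull { line ->
            val parts = line.split(" ", limit = 2)
            if (parts.size == 2) GovernanceCommand(parts[0], parts[1]) else null
        }

// An LLM response such as:
//   check naming
//   report api-governance
// would become [GovernanceCommand("check", "naming"), GovernanceCommand("report", "api-governance")].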

Once you visualize your software architecture, you will notice this particularly clearly.


Summary: we need to explore and search for a suitable next generation of APIs that makes it easier for the LLM to understand, schedule, and orchestrate services. Examples include natural language as the human-machine API and a DSL as the API between the AI and the machine. For instance, in an online customer-service application, the AI can use natural language processing to understand customer issues and automatically assign them to different customer-service representatives based on the type and urgency of the issue.

Summary

Summary from the immature Notion AI:

This article introduces software architecture design principles for building around LLMs: user-intent-driven design, context awareness, atomic capability mapping, and language API. By designing a brand-new human-machine interaction experience, building an application architecture suited to gathering business context, analyzing the atomic capabilities the LLM excels at, and exploring a suitable new generation of APIs, we can better use the LLM to assist architecture design, build architectures based on the LLM, guide architecture design with the LLM, and construct software architecture centered around the LLM.
