Kindwise meets large language models

By Ondřej Veselý
March 25, 2024

Users often struggle with primary questions like:

  • What are the care instructions?
  • Is it safe for children or pets?
  • Is it weedy or invasive?
  • Why are the leaves turning yellow?
  • Is this bug good or bad for my garden?

Sometimes the bug/plant/mushroom identification is only the first step to solving the real problem.

We at Kindwise already cover the most common cases by providing various details about the identified species or disease. However, users' use cases are sometimes so varied and unpredictable that this is not fully satisfactory, and the user has to do their own research on the Internet to answer the primary question. Can we do more to help?

The best solution is a domain expert users can talk to. With the help of an LLM, we can play the role of such an expert.

Let’s call it Ask AI.

Power of integration and synergy

Today's LLMs can't do this alone. Most available LLMs cannot work with images, and those that can lack identification accuracy, with responses that suffer from made-up “facts”.

Conceptually, we believe the solution lies in combining these assets:

  1. narrowly focused deep learning computer vision models - to make the identification accurate
  2. a vast, curated plant/mushroom/insect/disease knowledge base - to make the facts accurate
  3. large language models - to cover various use cases, add an alternative interface, and customize the results

As it happens, being curious and having all these assets under the hood allowed us to put them together into a functional tech prototype demo on plant.id.

We plan to release similar demos on insect.id, mushroom.id, and crop.health.

How does it actually work? For each identification, we feed the API response to the LLM, provide context via a system prompt, and let the LLM chat with the user.
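The flow described above can be sketched roughly as follows. This is an illustrative example only, not the actual Kindwise implementation: the response shape, `build_messages` helper, and the "answer only from the data" instruction are all assumptions.

```python
import json

# Hypothetical, abbreviated shape of an identification API response.
PLANT_ID_RESPONSE = {
    "suggestions": [
        {"name": "Ficus lyrata", "probability": 0.93},
        {"name": "Ficus elastica", "probability": 0.04},
    ],
    "details": {"watering": "moderate", "toxicity": "mildly toxic to pets"},
}

def build_messages(api_response: dict, user_question: str) -> list[dict]:
    """Combine the identification result and a system prompt into a chat."""
    system_prompt = (
        "You are a plant-care expert. Answer using the identification data "
        "below; say so if the answer is not covered by it.\n\n"
        f"Identification data:\n{json.dumps(api_response, indent=2)}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(PLANT_ID_RESPONSE, "Is it safe for my cat?")
# `messages` can now be sent to any chat-completion-style LLM endpoint.
```

Grounding the system prompt in the structured API response is what keeps the chat anchored to an accurate identification instead of the LLM's own guess.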

What it looks like on our Plant.id demo

Feedback wanted

The purpose of this demo is to get your feedback on the integration's reliability (we are internally testing 6 LLMs across 3 different providers). This will help us to:

  • Assess if it is actually useful
  • Tune the system prompts to capture the edge cases
  • Design a simple prompt engineering tool to allow clients to be in control of the chats

Plans for the future

We are planning to release the Ask AI feature in Q2 2024 as part of the Kindwise API portfolio for each of our products, including documentation, pricing, and a long-term roadmap.

Unlike the current generic implementation in our web demos, Ask AI is designed to be flexible. You will be in control of the system prompt, defining the communication style, context, and the chat's identity/brand and role. We also plan to support a progressive disclosure UX: streaming individual tokens to the client one by one as we receive them from the LLM. Another planned feature is explicit multilinguality.
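A common way to deliver such progressive disclosure is server-sent events (SSE), where each token is relayed downstream as soon as the LLM emits it. The sketch below is an assumption about how this could look, not our actual API; `fake_llm_stream` stands in for a real streaming LLM response.

```python
import json
from typing import Iterator

def fake_llm_stream(reply: str) -> Iterator[str]:
    """Stand-in for a streaming LLM API, yielding one token at a time."""
    for token in reply.split():
        yield token + " "

def to_sse(token_stream: Iterator[str]) -> Iterator[str]:
    """Wrap each token in an SSE frame and relay it to the client immediately."""
    for token in token_stream:
        yield f"data: {json.dumps({'token': token})}\n\n"
    yield "data: [DONE]\n\n"  # conventional end-of-stream marker

frames = list(to_sse(fake_llm_stream("Water it once a week.")))
```

The client renders each frame as it arrives, so the user sees the answer forming word by word instead of waiting for the full reply.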

Let us know if you want to be part of the internal alpha-test group 🙌 

In the long term, we may consider developing our own fine-tuned LLM to achieve even better results or production efficiency.
